Stanford Computer Science Department Technical Reports from the 1970s
The authors have given permission to make their technical reports
available on this server. If a report was published in print and is not
listed here, the author may have published it elsewhere.
Report Number: CS-TR-70-146
Institution: Stanford University, Department of Computer Science
Title: Roundoff error analysis of the fast Fourier transform
Author: Ramos, George U.
Date: February 1970
Abstract:
This paper presents an analysis of roundoff errors occurring in the
floating-point computation of the fast Fourier transform. Upper bounds
are derived for the ratios of the root-mean-square (RMS) and maximum
roundoff errors in the output data to the RMS value of the input data
for both single and multidimensional transformations. These bounds are
compared experimentally with actual roundoff errors.
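As a purely illustrative companion to the quantities bounded here, the
following Python sketch (not the report's experiment; transform length and
data are arbitrary choices) estimates the ratio of the RMS roundoff error in
the output to the RMS value of the input by running the same radix-2 FFT in
single and in double precision, the latter standing in for the exact result.

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; keeps the dtype of x."""
    n = x.shape[0]
    if n == 1:
        return x.copy()
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n).astype(x.dtype)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

rng = np.random.default_rng(0)
n = 4096
x64 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)).astype(np.complex128)
x32 = x64.astype(np.complex64)              # rounded single-precision input

y64 = fft_radix2(x64)                       # double-precision reference
y32 = fft_radix2(x32).astype(np.complex128) # single-precision computation

rms_error = np.sqrt(np.mean(np.abs(y32 - y64) ** 2))
rms_input = np.sqrt(np.mean(np.abs(x64) ** 2))
print("RMS roundoff error / RMS input:", rms_error / rms_input)
```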
CS-TR-70-146
Report Number: CS-TR-70-147
Institution: Stanford University, Department of Computer Science
Title: Pitfalls in computation, or why a math book isn't enough
Author: Forsythe, George E.
Date: January 1970
Abstract:
The floating-point number system is contrasted with the real numbers.
The author then illustrates the variety of computational pitfalls a
person can fall into who merely translates information gained from pure
mathematics courses into computer programs. Examples include summing a
Taylor series, solving a quadratic equation, solving linear algebraic
systems, solving ordinary and partial differential equations, and
finding polynomial zeros. It is concluded that mathematics courses
should be taught with a greater awareness of automatic computation.
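One of the pitfalls listed, solving a quadratic equation, can be made
concrete with a short Python sketch (not taken from the report): the textbook
formula loses the small root to cancellation when $b^2 \gg 4ac$, while an
algebraically equivalent rearrangement does not.

```python
import math

def roots_textbook(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    q = -0.5 * (b + math.copysign(d, b))   # avoid subtracting nearly equal numbers
    return q / a, c / q

a, b, c = 1.0, 1.0e8, 1.0
print(roots_textbook(a, b, c))  # the small root is badly contaminated
print(roots_stable(a, b, c))    # small root comes out close to -1.0e-8
```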
CS-TR-70-147
Report Number: CS-TR-70-150
Institution: Stanford University, Department of Computer Science
Title: Elementary proof of the Wielandt-Hoffman Theorem and of its generalization
Author: Wilkinson, James H.
Date: January 1970
Abstract:
An elementary proof is given of the Wielandt-Hoffman Theorem for normal
matrices and of a generalization of this theorem. The proof makes no
direct appeal to results from linear-programming theory.
CS-TR-70-150
Report Number: CS-TR-70-151
Institution: Stanford University, Department of Computer Science
Title:
"On the Properties of the Derivatives of the Solutions of Laplace's
Equation and the Errors of the Method of Finite Differences for
Boundary Values in $C_2$ and $C_{1,1}$" by E. A. Volkov
Author: Volkov, E. A.
Author: Forsythe, George E.
Date: January 1970
Abstract:
If a function u is harmonic in a circular disk and its boundary values
are twice continuously differentiable, u need not have bounded second
derivatives in the open disk. For the Dirichlet problem for Laplace's
equation in a more general two-dimensional region the discretization
error of the ordinary method of finite differences is studied, when
Collatz's method of linear interpolation is used at the boundary. If
the boundary of the region has a tangent line whose angle satisfies a
Lipschitz condition, and if the boundary values have a first derivative
satisfying a Lipschitz condition, then the discretization error is
shown to be of order $h^2 \ln h^{-1}$. This bound is shown to be sharp.
By a different method of interpolation at the boundary one can improve
the bound to $o(h^2)$. There are other similar results. Translated by
G. E. Forsythe.
CS-TR-70-151
Report Number: CS-TR-70-155
Institution: Stanford University, Department of Computer Science
Title: The method of odd/even reduction and factorization with application to Poisson's equation, part II
Author: Buzbee, B. L.
Author: Golub, Gene H.
Author: Nielson, C. W.
Date: March 1970
Abstract:
In this paper, we derive and generalize the methods of Buneman for
solving elliptic partial difference equations in a rectangular region.
We show why the Buneman methods lead to numerically accurate solutions
whereas the CORF algorithm may be numerically unstable. Several
numerical examples are given and discussed.
CS-TR-70-155
Report Number: CS-TR-70-156
Institution: Stanford University, Department of Computer Science
Title: On a model for computing round-off error of a sum
Author: Dantzig, George B.
Date: March 1970
Abstract: No abstract available.
CS-TR-70-156
Report Number: CS-TR-70-157
Institution: Stanford University, Department of Computer Science
Title: Algorithms for matrix multiplication
Author: Brent, Richard P.
Date: March 1970
Abstract:
Strassen's and Winograd's algorithms for matrix multiplication are
investigated and compared with the normal algorithm. Floating-point
error bounds are obtained, and it is shown that scaling is essential
for numerical accuracy using Winograd's method. In practical cases
Winograd's method appears to be slightly faster than the other two
methods, but the gain is, at most, about 20%. Finally, an attempt to
generalize Strassen's method is described.
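For reference, a minimal Python sketch of Strassen's 2×2 block recursion, the
scheme analyzed in the report, is given below; it assumes square matrices
whose order is a power of two and falls back to ordinary multiplication below
a cutoff. This is an illustration, not the report's code.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                      # fall back to the normal algorithm
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)   # Strassen's seven products
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))
```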
CS-TR-70-157
Report Number: CS-TR-70-159
Institution: Stanford University, Department of Computer Science
Title: The use of direct methods for the solution of the discrete Poisson equation on non-rectangular regions
Author: George, John Alan
Date: June 1970
Abstract:
Some direct and iterative schemes are presented for solving a standard
finite-difference scheme for Poisson's equation on a two-dimensional
bounded region R with Dirichlet conditions specified on the boundary
$\partial R$. These procedures make use of special-purpose direct methods
for solving rectangular Poisson problems. The region is imbedded in a
rectangle and a uniform mesh is superimposed on it. The usual
five-point Poisson difference operator is applied over the whole
rectangle, yielding a block-tridiagonal system of equations. The
original problem, however, determines only the elements of the
right-hand side which correspond to grid points lying within $\partial R$;
the remaining elements can be treated as parameters. The iterative
algorithms construct a sequence of right-hand sides in such a way that
the corresponding sequence of solutions on the rectangle converges to
the solution of the imbedded problem.
CS-TR-70-159
Report Number: CS-TR-70-160
Institution: Stanford University, Department of Computer Science
Title: A model for parallel computer systems
Author: Bredt, Thomas H.
Author: McCluskey, Edward J.
Date: April 1970
Abstract:
A flow table model is defined for parallel computer systems. In this
model, fundamental-mode flow tables are used to describe the operation
of system components, which may be programs or circuits. Components
communicate by changing the values on interconnecting lines which carry
binary level signals. It is assumed that there is no bound on the time
for value changes to propagate over the interconnecting lines. Given
this delay assumption, it is necessary to specify a mode of operation
for system components such that input changes which arrive while a
component is unstable do not affect the operation of the component.
Such a mode of operation is specified. Using the flow table model, a
new control algorithm for the two-process mutual exclusion problem is
designed. This algorithm does not depend on the exclusive execution of
any primitive operations used in its implementation. A circuit
implementation of the control algorithm is described.
CS-TR-70-160
Report Number: CS-TR-70-162
Institution: Stanford University, Department of Computer Science
Title: Numerical techniques in mathematical programming
Author: Bartels, Richard H.
Author: Golub, Gene H.
Author: Saunders, Michael A.
Date: May 1970
Abstract:
The application of numerically stable matrix decompositions to
minimization problems involving linear constraints is discussed and
shown to be feasible without undue loss of efficiency. Part A describes
computation and updating of the product-form of the LU decomposition of
a matrix and shows it can be applied to solving linear systems at least
as efficiently as standard techniques using the product-form of the
inverse. Part B discusses orthogonalization via Householder
transformations, with applications to least squares and quadratic
programming algorithms based on the principal pivoting method of Cottle
and Dantzig. Part C applies the singular value decomposition to the
nonlinear least squares problem and discusses related eigenvalue
problems.
CS-TR-70-162
Report Number: CS-TR-70-163
Institution: Stanford University, Department of Computer Science
Title: An algorithm for floating-point accumulation of sums with small relative error
Author: Malcolm, Michael A.
Date: June 1970
Abstract:
A practical algorithm for floating-point accumulation is presented.
Through the use of multiple accumulators, errors due to cancellation
are avoided. An example in Fortran is included. An error analysis
providing a sharp bound on the relative error is also given.
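A much simplified sketch of the multiple-accumulator idea follows (this is
not Malcolm's published algorithm and carries none of its error analysis):
summands are binned by binary exponent so that each accumulator only combines
numbers of similar magnitude, and the per-bin totals are then added from
smallest to largest, so small terms are not absorbed by large ones.

```python
import math
from collections import defaultdict

def binned_sum(xs):
    acc = defaultdict(float)           # one accumulator per binary exponent
    for x in xs:
        _, e = math.frexp(x)           # x = m * 2**e with 0.5 <= |m| < 1
        acc[e] += x
    return sum(acc[e] for e in sorted(acc))   # combine small magnitudes first

data = [1.0e16] + [1.0] * 10000 + [-1.0e16]
print(sum(data))         # 0.0: every 1.0 is absorbed by the large term
print(binned_sum(data))  # 10000.0
print(math.fsum(data))   # exactly rounded reference: 10000.0
```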
CS-TR-70-163
Report Number: CS-TR-70-164
Institution: Stanford University, Department of Computer Science
Title: "Estimates of the Roundoff Error in the Solution of a System of Conditional Equations" by V. I. Gordonova
Author: Gordonova, V. I.
Author: Kaufman, Linda C.
Date: June 1970
Abstract:
Using backward error analysis, this paper compares the roundoff error
in the least-squares solution of a system of conditional equations Ax=f
by two different methods. The first one entails solving the normal
equations $A^T$Ax=$A^T$f and the second is one proposed by Faddeev,
Faddeeva, and Kublanovskaya in 1966. This latter method involves
multiplying the system by orthogonal matrices to transform the matrix A
into upper triangular form. Translated by Linda Kaufman.
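A brief numpy illustration of the contrast being drawn (my sketch, not the
translated paper's): forming the normal equations squares the condition
number of A, whereas an orthogonal reduction of A to upper triangular form
works with A directly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 40), 12)       # moderately ill-conditioned
x_true = rng.standard_normal(12)
f = A @ x_true

x_normal = np.linalg.solve(A.T @ A, A.T @ f)   # normal equations A^T A x = A^T f
Q, R = np.linalg.qr(A)                          # orthogonal reduction of A
x_qr = np.linalg.solve(R, Q.T @ f)

print(np.linalg.cond(A), np.linalg.cond(A.T @ A))   # conditioning gets squared
print(np.linalg.norm(x_normal - x_true), np.linalg.norm(x_qr - x_true))
```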
CS-TR-70-164
Report Number: CS-TR-70-165
Institution: Stanford University, Department of Computer Science
Title: The scheduling of n tasks with m operations on two processors
Author: Bauer, Henry R.
Author: Stone, Harold S.
Date: July 1970
Abstract:
The job shop problem is one scheduling problem for which no efficient
algorithm exists. That is, no algorithm is known in which the number of
computational steps grows algebraically as the problem enlarges. This
paper presents a discussion of the problem of scheduling N tasks on two
processors when each task consists of three operations. The operations
of each task must be performed in order and among the processors. We
analyze this problem through four sub-problems. Johnson's scheduling
algorithm is generalized to solve two of these sub-problems, and
functional equation algorithms are used to solve the remaining two
problems. Except for one case, the algorithms are efficient. The
exceptional case has been labelled the "core" problem and the
difficulties are described.
CS-TR-70-165
Report Number: CS-TR-70-170
Institution: Stanford University, Department of Computer Science
Title: Analysis and synthesis of concurrent sequential programs
Author: Bredt, Thomas H.
Date: May 1970
Abstract:
This paper presents analysis and synthesis procedures for a class of
sequential programs. These procedures aid in the design of programs for
parallel computer systems. In particular, the interactions of a given
program with other programs or circuits in a system can be described
precisely. The basis for this work is a model for parallel computer
systems in which the operation of each component is described by a flow
table and the components interact by changing values on interconnecting
lines. The details of this model are discussed in another paper
[Stanford University Department of Computer Science report
STAN-CS-70-160]. The analysis procedure produces a flow table
description of a program. In program synthesis, a flow table
description is converted to a sequential program. Using flow table
design procedures, a control program for the two-program mutual
exclusion problem is produced.
CS-TR-70-170
Report Number: CS-TR-70-171
Institution: Stanford University, Department of Computer Science
Title: A survey of models for parallel computing
Author: Bredt, Thomas H.
Date: August 1970
Abstract:
The work of Adams, Karp and Miller, Luconi, and Rodriguez on formal
models for parallel computations and computer systems is reviewed. A
general definition of a parallel schema is given so that the
similarities and differences of the models can be discussed. Primary
emphasis is on the control structures used to achieve parallel
operation and on properties of the models such as determinacy and
equivalence. Decidable and undecidable properties are summarized.
CS-TR-70-171
Report Number: CS-TR-70-172
Institution: Stanford University, Department of Computer Science
Title: Analysis of parallel systems
Author: Bredt, Thomas H.
Date: August 1970
Abstract:
A formal analysis procedure for parallel computer systems is presented.
The flow table model presented in an earlier paper [Stanford University
Department of Computer Science report STAN-CS-70-160] is used to
describe a system. Each component of the system is described by a
completely specified fundamental-mode flow table. All delays in a
parallel system are assumed to be finite. Component delays are assumed
to be bounded and line delays unbounded. The concept of an output
hazard is introduced to account for the effects of line delay and the
lack of synchronization among components. Necessary and sufficient
conditions for the absence of output hazards are given. The state of a
parallel system is defined by the present internal state and input
state of each component. The operation of the system is described by a
system state graph which specifies all possible state transitions for a
specified initial system state. A procedure for constructing the system
state graph is given. The analysis procedure may be summarized as
follows. A problem is stated in terms of restrictions on system
operation. A parallel system is said to operate correctly with respect
to the given problem if the associated restrictions are always
satisfied. The restrictions specify either forbidden system states,
which are never to be entered during the operation of the system, or
forbidden system state sequences, which must never appear during system
operation. The restrictions are tested by examining the system state
graph. A parallel system for the two-process mutual exclusion problem
is analyzed and the system is shown to operate correctly with respect
to this problem. Finally, the conditions of determinacy and output
functionality, which have been used in other models of parallel
computing, are discussed as they relate to correct solutions to the
mutual exclusion problem.
CS-TR-70-172
Report Number: CS-TR-70-173
Institution: Stanford University, Department of Computer Science
Title: The mutual exclusion problem
Author: Bredt, Thomas H.
Date: August 1970
Abstract:
This paper discusses how n components, which may be programs or
circuits, in a computer system can be controlled so that (1) at most
one component may perform a designated "critical" operation at any
instant and (2) if one component wants to perform its critical
operation, it is eventually allowed to do so. This control problem is
known as the mutual exclusion or interlock problem. A summary of the
flow table model [Stanford University Department of Computer Science
report STAN-CS-70-160] for computer systems is given. In this model, a
control algorithm is represented by a flow table. The number of
internal states in the control flow table is used as a measure of the
complexity of control algorithms. A lower bound of n + 1 internal
states is shown to be necessary if the mutual exclusion problem is to
be solved. Procedures to generate control flow tables for the mutual
exclusion problem which require the minimum number of internal states
are described and it is proved that these procedures give correct
control solutions. Other so-called "unbiased" algorithms are described
which require 2·n! internal states but break ties in the case of
multiple requests in favor of the component that least recently
executed its critical operation. The paper concludes with a discussion
of the tradeoffs between central and distributed control algorithms.
CS-TR-70-173
Report Number: CS-TR-70-174
Institution: Stanford University, Department of Computer Science
Title: Towards automatic program synthesis
Author: Manna, Zohar
Author: Waldinger, Richard J.
Date: July 1970
Abstract:
An elementary outline of the theorem-proving approach to automatic
program synthesis is given, without dwelling on technical details. The
method is illustrated by the automatic construction of both recursive
and iterative programs operating on natural numbers, lists, and trees.
In order to construct a program satisfying certain specifications, a
theorem induced by those specifications is proved, and the desired
program is extracted from the proof. The same technique is applied to
transform recursively defined functions into iterative programs,
frequently with a major gain in efficiency. It is emphasized that in
order to construct a program with loops or with recursion, the
principle of mathematical induction must be applied. The relation
between the version of the induction rule used and the form of the
program constructed is explored in some detail.
CS-TR-70-174
Report Number: CS-TR-70-175
Institution: Stanford University, Department of Computer Science
Title: A description and comparison of subroutines for computing Euclidean inner products on the IBM 360
Author: Malcolm, Michael A.
Date: October 1970
Abstract:
Several existing subroutines and an Algol W procedure for computing
inner products on the IBM 360, using more precision than long, are
described and evaluated. Error bounds (when they exist) and execution
timing tests are included.
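The idea being compared can be sketched in a few lines of Python (this is
none of the surveyed IBM 360 routines): accumulate the inner product in more
precision than the working precision, here with exact rational arithmetic
standing in for a long accumulator, and round only once at the end.

```python
from fractions import Fraction

def dot_extra_precision(x, y):
    acc = Fraction(0)
    for a, b in zip(x, y):
        acc += Fraction(a) * Fraction(b)   # each float converts exactly
    return float(acc)                      # a single rounding at the end

def dot_naive(x, y):
    s = 0.0
    for a, b in zip(x, y):
        s += a * b                         # rounds after every product and sum
    return s

x = [1.0e16, 1.0, -1.0e16]
y = [1.0, 1.0, 1.0]
print(dot_naive(x, y), dot_extra_precision(x, y))   # 0.0 versus 1.0
```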
CS-TR-70-175
Report Number: CS-TR-70-176
Institution: Stanford University, Department of Computer Science
Title: On generality and problem solving: a case study using the DENDRAL program
Author: Feigenbaum, Edward A.
Author: Buchanan, Bruce G.
Author: Lederberg, Joshua
Date: August 1970
Abstract:
Heuristic DENDRAL is a computer program written to solve problems of
inductive inference in organic chemistry. This paper will use the
design of Heuristic DENDRAL and its performance on different problems
for a discussion of the following topics: 1. the design for generality;
2. the performance problems attendant upon too much generality; 3. the
coupling of expertise to the general problem solving processes; 4. the
symbiotic relationship between generality and expertness, and the
implications of this symbiosis for the study and design of problem
solving systems. We conclude the paper with a view of the design for a
general problem solver that is a variant of the "big switch" theory of
generality.
CS-TR-70-176
Report Number: CS-TR-70-178
Institution: Stanford University, Department of Computer Science
Title: Research in the Computer Science Department and selected other research in computing at Stanford University
Author: Forsythe, George E.
Author: Miller, William F.
Date: October 1970
Abstract:
The research program of the Computer Science Department can perhaps be
best summarized in terms of its research projects. The chart on page ii
lists the projects and the participation by faculty and students. The
sections following the chart provide descriptions of the individual
projects. There are a number of projects in other schools or
departments which are making significant contributions to computer
science; and these add to the total computer environment. Descriptions
of a few of these projects are also included with this report. This
list of projects outside of Computer Science does not purport to be
complete or even representative.
CS-TR-70-178
Report Number: CS-TR-70-179
Institution: Stanford University, Department of Computer Science
Title: MLISP
Author: Smith, David Canfield
Date: October 1970
Abstract:
MLISP is a high level list-processing and symbol-manipulation language
based on the programming language LISP. MLISP programs are translated
into LISP programs and then executed or compiled. MLISP exists for two
purposes: (1) to facilitate the writing and understanding of LISP
programs; (2) to remedy certain important deficiencies in the
list-processing ability of LISP.
CS-TR-70-179
Report Number: CS-TR-70-183
Institution: Stanford University, Department of Computer Science
Title: Machine learning through signature trees: applications to human speech
Author: White, George M.
Date: October 1970
Abstract:
Signature tree "machine learning", pattern recognition heuristics are
investigated for the specific problem of computer recognition of human
speech. When the data base of given utterances is insufficient to
establish trends with confidence, a large number of feature extractors
must be employed and "recognition" of an unknown pattern made by
comparing its feature values with those of known patterns. When the
data base is replete, a "signature" tree can be constructed and
recognition can be achieved by the evaluation of a select few features.
Learning results from selecting an optimal minimal set of features to
achieve recognition. Properties of signature trees and the heuristics
for this type of learning are of primary interest in this exposition.
CS-TR-70-183
Report Number: CS-TR-70-184
Institution: Stanford University, Department of Computer Science
Title: A note on a conjecture of L. J. Mordell
Author: Malcolm, Michael A.
Date: November 1970
Abstract:
A computer proof is described for a previously unsolved problem
concerning the inequality $\sum_{i=1}^{n} x_i/(x_{i+1} + x_{i+2}) \geq
\frac{n}{2}$.
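A numerical spot-check of the inequality is easy to write (illustrative only,
not the report's computer proof; it assumes the indices are taken cyclically,
so $x_{n+1} = x_1$ and $x_{n+2} = x_2$).

```python
import random

def cyclic_sum(x):
    n = len(x)
    return sum(x[i] / (x[(i + 1) % n] + x[(i + 2) % n]) for i in range(n))

random.seed(1)
for n in (3, 4, 5, 6, 8, 10):
    # smallest value of the cyclic sum found over random positive inputs
    worst = min(cyclic_sum([random.uniform(0.1, 10.0) for _ in range(n)])
                for _ in range(20000))
    print(n, round(worst, 4), ">=", n / 2)
```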
CS-TR-70-184
Report Number: CS-TR-70-185
Institution: Stanford University, Department of Computer Science
Title: Graph program simulation
Author: Nelson, Edward C.
Date: October 1970
Abstract:
This report describes the simulation of a parallel processing system based on a
directed graph representation of parallel computations. The graph
representation is based on the model developed by Duane Adams in which
programs are written as directed graphs whose nodes represent
operations and whose edges represent data flow. The first part of the
report describes a simulator which interprets these graph programs. The
second part describes the use of the simulator in a hypothetical
environment which has an unlimited number of processors and an
unlimited amount of memory. Three programs, a trapezoidal quadrature, a
sort and a matrix multiplication, were used to study the effect of
varying the relative speed of primitive operations on computation time
with problem size. The system was able to achieve a high degree of
parallelism. For example, the simulator multiplied two n by n matrices
in a simulated time proportional to n.
CS-TR-70-185
Report Number: CS-TR-70-187
Institution: Stanford University, Department of Computer Science
Title: MPL, Mathematical Programming Language: specification manual for Committee review
Author: Eisenstat, Stanley C.
Author: Magnanti, Thomas L.
Author: Maier, Steven F.
Author: McGrath, Michael B.
Author: Nicholson, Vincent J.
Author: Riedl, Christiane
Author: Dantzig, George B.
Date: November 1970
Abstract:
Mathematical Programming Language (MPL) is intended as a highly
readable, user oriented, programming tool for use in the writing and
testing of mathematical algorithms, in particular experimental
algorithms for solving large-scale linear programs. It combines the
simplicity of standard mathematical notation with the power of complex
data structures. Variables may be implicitly introduced into a program
by their use in the statement in which they first appear. No formal
defining statement is necessary. Statements of the "let" and "where"
type are part of the language. Included within the allowable data
structures of MPL are matrices, partitioned matrices, and
multidimensional arrays. Ordered sets are included as vectors with
their constructs closely paralleling those found in set theory.
Allocation of storage is dynamic, thereby eliminating the need for a
data manipulating subset of the language, as is characteristic of most
high level scientific programming languages. This report summarizes the
progress that has been made to date in developing MPL. It contains a
specification manual, examples of the application of the language, and
the future directions and goals of the project. A version of MPL,
called MPL/70, has been implemented using PL/I as a translator. This
will be reported separately. Until fully implemented, MPL is expected
to serve primarily as a highly readable communication language for
mathematical algorithms.
CS-TR-70-187
Report Number: CS-TR-71-188
Institution: Stanford University, Department of Computer Science
Title: The translation of 'go to' programs to 'while' programs
Author: Ashcroft, Edward A.
Author: Manna, Zohar
Date: January 1971
Abstract:
In this paper we show that every flowchart program can be written
without $\underline{go\ to}$ statements by using $\underline{while}$
statements. The main idea is to introduce new variables to preserve the
values of certain variables at particular points in the program; or
alternatively, to introduce special boolean variables to keep
information about the course of the computation. The 'while' programs
produced yield the same final results as the original flowchart program
but need not perform computations in exactly the same way. However, the
new programs do preserve the 'topology' of the original flowchart
program, and are of the same order of efficiency. We also show that
this cannot be done in general without adding variables.
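One standard construction in this spirit can be shown in a few lines (an
illustration only, not the paper's translation scheme): a new variable
records the current point in the flowchart, and a single while loop
dispatches on it.

```python
def gcd_flowchart_style(a, b):
    pc = "test"                 # extra variable preserving the control point
    while pc != "done":
        if pc == "test":
            pc = "step" if b != 0 else "done"
        elif pc == "step":
            a, b = b, a % b     # body of the loop in the original flowchart
            pc = "test"
    return a

print(gcd_flowchart_style(252, 105))   # 21
```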
CS-TR-71-188
Report Number: CS-TR-71-189
Institution: Stanford University, Department of Computer Science
Title: Mathematical theory of partial correctness
Author: Manna, Zohar
Date: January 1971
Abstract:
In this work we show that it is possible to express most properties
regularly observed in algorithms in terms of 'partial correctness'
(i.e., the property that the final results of the algorithm, if any,
satisfy some given input-output relation). This result is of special
interest since 'partial correctness' has already been formulated in
predicate calculus and in partial function logic for many classes of
algorithms.
CS-TR-71-189
Report Number: CS-TR-71-190
Institution: Stanford University, Department of Computer Science
Title: An n log n algorithm for minimizing states in a finite automaton
Author: Hopcroft, John E.
Date: January 1971
Abstract:
An algorithm is given for minimizing the number of states in a finite
automaton or for determining if two finite automata are equivalent. The
asymptotic running time of the algorithm is bounded by k n log n where
k is some constant and n is the number of states. The constant k
depends linearly on the size of the input alphabet.
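A readable Python rendering of the underlying partition-refinement idea is
given below; this simple version uses set scans and therefore does not
achieve the n log n bound, which requires the more careful bookkeeping of the
report. The DFA encoding used here is an assumption made for illustration.

```python
def minimize(states, alphabet, delta, accepting):
    """delta maps (state, symbol) -> state; returns the coarsest partition
    of the states into equivalence classes, as a set of frozensets."""
    final = frozenset(accepting)
    nonfinal = frozenset(states) - final
    partition = {p for p in (final, nonfinal) if p}
    worklist = set(partition)
    while worklist:
        splitter = worklist.pop()
        for c in alphabet:
            # states whose c-transition leads into the splitter block
            x = {q for q in states if delta[(q, c)] in splitter}
            for block in list(partition):
                inter, diff = block & x, block - x
                if inter and diff:                 # block is split by x
                    partition.remove(block)
                    partition |= {inter, diff}
                    if block in worklist:
                        worklist.remove(block)
                        worklist |= {inter, diff}
                    else:                          # only the smaller half is reprocessed
                        worklist.add(min(inter, diff, key=len))
    return partition

# Example: a 4-state DFA over {'a'} in which states 1 and 2 are equivalent.
states = {0, 1, 2, 3}
delta = {(0, 'a'): 1, (1, 'a'): 3, (2, 'a'): 3, (3, 'a'): 3}
print(minimize(states, {'a'}, delta, accepting={3}))
```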
CS-TR-71-190
Report Number: CS-TR-71-191
Institution: Stanford University, Department of Computer Science
Title: An introduction to the direct emulation of control structures by a parallel micro-computer
Author: Lesser, Victor R.
Date: January 1971
Abstract:
This paper is an investigation of the organization of a parallel
micro-computer designed to emulate a wide variety of sequential and
parallel computers. This micro-computer allows tailoring of its control
structure so that it is appropriate for the particular computer to be
emulated. The control structure of this micro-computer is dynamically
modified by changing the organization of its data structure for
control. The micro-computer contains six primitive operators which
dynamically manipulate and generate a tree type data structure for
control. This data structure for control is used as a syntactic
framework within which particular implementations of control concepts,
such as iteration, recursion, co-routines, parallelism, interrupts,
etc., can be easily expressed. The major features of the control data
structure and the primitive operators are: (1) once the fixed control
and data linkages among processes have been defined, they need not be
rebuilt on subsequent executions of the control structure; (2)
micro-programs may be written so that they execute independently of the
number of physical processors present and still take advantage of
available processors; (3) control structures for I/O processes,
data-accessing processes, and computational processes are expressed in
a single uniform framework. An emulator programmed on this
micro-computer works as an iterative two-step process similar to the
process of dynamic compilation or run time macro-expansion. This
dynamic compilation approach to emulation differs considerably from the
conventional approach to emulation, and provides a unifying approach to
the emulation of a wide variety of sequential and parallel computers.
CS-TR-71-191
Report Number: CS-TR-71-192
Institution: Stanford University, Department of Computer Science
Title: An n log n algorithm for isomorphism of planar triply connected graphs
Author: Hopcroft, John E.
Date: January 1971
Abstract:
It is shown that the isomorphism problem for triply connected planar
graphs can be reduced to the problem of minimizing states in a finite
automaton. By making use of an n log n algorithm for minimizing the
number of states in a finite automaton, an algorithm for determining
whether two planar triply connected graphs are isomorphic is developed.
The asymptotic running time of the algorithm grows as n log n where n is
the number of vertices in the graph.
CS-TR-71-192
Report Number: CS-TR-71-193
Institution: Stanford University, Department of Computer Science
Title: Intention, memory, and computer understanding
Author: Schank, Roger C.
Date: January 1971
Abstract:
Procedures are described for discovering the intention of a speaker by
relating the Conceptual Dependency representation of the speaker's
utterance to the computer's world model such that simple implications
can be made. These procedures function at levels higher than that of
the sentence by allowing for predictions based on context and the
structure of the memory. Computer understanding of natural language is
shown to consist of the following parts: assigning a conceptual
representation to an input; relating that representation to the memory
such as to extract the intention of the speaker; and selecting the
correct response type triggered by such an utterance according to the
situation.
CS-TR-71-193
Report Number: CS-TR-71-195
Institution: Stanford University, Department of Computer Science
Title: The direct solution of the discrete Poisson equation on irregular regions
Author: Buzbee, B. L.
Author: Dorr, Fred W.
Author: George, John Alan
Author: Golub, Gene H.
Date: December 1970
Abstract:
There are several very fast direct methods which can be used to solve
the discrete Poisson equation on rectangular domains. We show that
these methods can also be used to treat problems on irregular regions.
CS-TR-71-195
Report Number: CS-TR-71-197
Institution: Stanford University, Department of Computer Science
Title: MIX/360 user's guide
Author: Knuth, Donald E.
Author: Sites, Richard L.
Date: March 1971
Abstract:
MIX/360 is an assembler and simulator for the hypothetical MIX machine,
which is described for example in Knuth's $\underline{The Art of
Computer Programming}$, Section 1.3.1. The system contains several
debugging aids to help program construction and verification.
CS-TR-71-197
Report Number: CS-TR-71-201
Institution: Stanford University, Department of Computer Science
Title: Planarity testing in V log V steps: extended abstract
Author: Hopcroft, John E.
Author: Tarjan, Robert Endre
Date: February 1971
Abstract:
An efficient algorithm is presented for determining whether or not a
given graph is planar. If V is the number of vertices in the graph, the
algorithm requires time proportional to V log V and space proportional
to V when run on a random-access computer. The algorithm constructs the
facial boundaries of a planar representation without backup, using
extensive list-processing features to speed computation. The
theoretical time bound improves on that of previously published
algorithms. Experimental evidence indicates that graphs with a few
thousand edges can be tested within seconds.
CS-TR-71-201
Report Number: CS-TR-71-202
Institution: Stanford University, Department of Computer Science
Title: Communicating semaphores
Author: Saal, Harry J.
Author: Riddle, William E.
Date: February 1971
Abstract:
This paper describes two extensions to the semaphore operators
originally introduced by Dijkstra. These extensions can be used to
reduce: 1) the number of semaphore references; 2) the time spent in
critical sections; and 3) the number of distinct semaphores required
for proper synchronization without greatly increasing the time required
for semaphore operations. Communicating semaphores may be utilized not
only for synchronization but also for message switching, resource
allocation from pools and as general queueing mechanisms.
CS-TR-71-202
Report Number: CS-TR-71-203
Institution: Stanford University, Department of Computer Science
Title: The Heuristic DENDRAL program for explaining empirical data
Author: Buchanan, Bruce G.
Author: Lederberg, Joshua
Date: February 1971
Abstract:
The Heuristic DENDRAL program uses an information processing model of
scientific reasoning to explain experimental data in organic chemistry.
This report summarizes the organization and results of the program for
computer scientists. The program is divided into three main parts:
planning, structure generation, and evaluation. The planning phase
infers constraints on the search space from the empirical data input to
the system. The structure generation phase searches a tree whose
termini are models of chemical molecules using pruning heuristics of
various kinds. The evaluation phase tests the candidate structures
against the original data. Results of the program's analyses of some
test data are discussed.
CS-TR-71-203
Report Number: CS-TR-71-204
Institution: Stanford University, Department of Computer Science
Title: FETE: a Fortran execution time estimator
Author: Ingalls, Daniel H. H.
Date: February 1971
Abstract:
If you want to live cheaply, you must make a list of how much money is
spent on each thing every day. This enumeration will quickly reveal the
principal areas of waste. The same method works for saving computer
time. Originally, one had to put his own timers and counters into a
program to determine the distribution of time spent in each part.
Recently several automated systems have appeared which either insert
counters automatically or interrupt the program during its execution to
produce the tallies. FETE is a system of the former type which has two
outstanding characteristics: it is very easy to implement and it is
very easy to use. By demonstrating such convenience, it should
establish execution timing as a standard tool in program development.
CS-TR-71-204
Report Number: CS-TR-71-205
Institution: Stanford University, Department of Computer Science
Title: An algebraic definition of simulation between programs
Author: Milner, Robin
Date: February 1971
Abstract:
A simulation relation between programs is defined which is
quasi-ordering. Mutual simulation is then an equivalence relation, and
by dividing out by it we abstract from a program such details as how
the sequencing is controlled and how data is represented. The
equivalence classes are approximations to the algorithms which are
realized, or expressed, by their member programs. A technique is given
and illustrated for proving simulation and equivalence of programs;
there is an analogy with Floyd's technique for proving correctness of
programs. Finally, necessary and sufficient conditions for simulation
are given.
CS-TR-71-205
Report Number: CS-TR-71-207
Institution: Stanford University, Department of Computer Science
Title: Efficient algorithms for graph manipulation
Author: Hopcroft, John E.
Author: Tarjan, Robert Endre
Date: March 1971
Abstract:
Efficient algorithms are presented for partitioning a graph into
connected components, biconnected components and simple paths. The
algorithm for partitioning of a graph into simple paths is iterative
and each iteration produces a new path between two vertices already on
paths. (The start vertex can be specified dynamically.) If V is the
number of vertices and E is the number of edges each algorithm requires
time and space proportional to max(V,E) when executed on a random
access computer.
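The flavor of these algorithms, depth-first search with time and space
linear in max(V,E), can be seen in a short sketch for the simplest case,
connected components; the biconnectivity and path-partition algorithms of the
report need additional bookkeeping not shown here.

```python
from collections import defaultdict

def connected_components(vertices, edges):
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, components = set(), []
    for start in vertices:
        if start in seen:
            continue
        comp, stack = [], [start]
        seen.add(start)
        while stack:                      # iterative depth-first search
            u = stack.pop()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        components.append(comp)
    return components

print(connected_components([1, 2, 3, 4, 5], [(1, 2), (2, 3), (4, 5)]))
# [[1, 2, 3], [4, 5]]
```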
CS-TR-71-207
Report Number: CS-TR-71-209
Institution: Stanford University, Department of Computer Science
Title: Project technical report
Author: McCarthy, John
Author: Samuel, Arthur L.
Author: Feigenbaum, Edward A.
Author: Lederberg, Joshua
Date: March 1971
Abstract:
An overview is presented of current research at Stanford in artificial
intelligence and heuristic programming. This report is largely the text
of a proposal to the Advanced Research Projects Agency for fiscal years
1972-73.
CS-TR-71-209
Report Number: CS-TR-71-210
Institution: Stanford University, Department of Computer Science
Title: ACCESS: a program for the catalog and access of information
Author: Purdy, J. Gerry
Date: March 1971
Abstract:
ACCESS is a program for the catalog and access of information. The
program is primarily designed for and intended to handle a personal
library, although larger applications are possible. ACCESS produces a
listing of all entries by locator code (so one knows where to find the
entry in his library), a listing of entry titles by user-specified
category codes, and a keyword-in-context KWIC listing (each keyword
specified by the user). ACCESS is presently programmed in FORTRAN and
operates on any IBM System/360 under OS (it uses the IBM SORT/MERGE
package). It is anticipated that a machine language version (soon to be
implemented) will greatly decrease the running time of the program.
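As a toy illustration of one of the outputs described, a keyword-in-context
(KWIC) listing can be produced with a few lines of Python; this is not the
ACCESS program, and the formatting choices are arbitrary.

```python
def kwic(titles, keywords):
    lines = []
    for title in titles:
        words = title.split()
        for i, w in enumerate(words):
            if w.lower() in keywords:
                left = " ".join(words[:i])
                right = " ".join(words[i + 1:])
                # sort key: the keyword itself, then its context
                lines.append((w.lower(), f"{left:>30}  [{w}]  {right}"))
    return [line for _, line in sorted(lines)]

titles = ["Roundoff error analysis of the fast Fourier transform",
          "Algorithms for matrix multiplication"]
for line in kwic(titles, {"error", "matrix", "fourier"}):
    print(line)
```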
CS-TR-71-210
Report Number: CS-TR-71-211
Institution: Stanford University, Department of Computer Science
Title: Algorithms to reveal properties of floating-point arithmetic
Author: Malcolm, Michael A.
Date: March 1971
Abstract:
Two algorithms are presented in the form of Fortran subroutines. Each
subroutine computes the radix and number of digits of the
floating-point numbers and whether rounding or chopping is done by the
machine on which it is run. The methods are shown to work on any
"reasonable" floating-point computer.
CS-TR-71-211
Report Number: CS-TR-71-212
Institution: Stanford University, Department of Computer Science
Title: Time and memory requirements for solving linear systems
Author: Morgana, Maria Aurora
Date: March 1971
Abstract:
The Computer Science Department program library contains a number of
ALGOL W procedures and FORTRAN subroutines which can be used to solve
systems of linear equations. This report describes the results of tests
to determine the amount of time and memory required to solve systems of
various orders.
CS-TR-71-212
Report Number: CS-TR-71-213
Institution: Stanford University, Department of Computer Science
Title: The switchyard problem: sorting using networks of queues and stacks
Author: Tarjan, Robert Endre
Date: April 1971
Abstract:
The problem of sorting a sequence of numbers using a network of queues
and stacks is presented. A characterization of sequences sortable using
parallel queues is given, and partial characterizations of sequences
sortable using parallel stacks and networks of queues are given.
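For the simplest special case, a single stack, sortability can be checked by
a greedy simulation (illustrative only; the report's characterizations
concern parallel queues, parallel stacks, and networks of these).

```python
def stack_sortable(seq):
    """Greedily push inputs and pop whenever the stack top is the next
    value needed in sorted order; one stack sorts seq iff this succeeds."""
    need = iter(sorted(seq))
    next_out = next(need, None)
    stack = []
    for x in seq:
        stack.append(x)
        while stack and stack[-1] == next_out:
            stack.pop()
            next_out = next(need, None)
    return not stack

print(stack_sortable([2, 1, 4, 3]))   # True
print(stack_sortable([2, 3, 1]))      # False: the "231" pattern defeats one stack
```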
CS-TR-71-213
Report Number: CS-TR-71-215
Institution: Stanford University, Department of Computer Science
Title: PL360 (revised): a programming language for the IBM 360
Author: Malcolm, Michael A.
Date: May 1972
Abstract:
In 1968, N. Wirth (Jan. JACM) published a formal description of PL360,
a programming language designed specifically for the IBM 360. PL360 has
an appearance similar to that of Algol, but it provides the facilities
of a symbolic machine language. Since 1968, numerous extensions and
modifications have been made to the PL360 compiler which was originally
designed and implemented by N. Wirth and J. Wells. Interface and
input-output subroutines have been written which allow the use of PL360
under OS, DOS, MTS and Orvyl. A formal description of PL360 as it is
presently implemented is given. The description of the language is
followed by sections on the use of PL360 under various operating
systems, namely OS, DOS and MTS. Instructions on how to use the PL360
compiler and PL360 programs in an interactive mode under the Orvyl
time-sharing monitor are also included.
CS-TR-71-215
Report Number: CS-TR-71-217
Institution: Stanford University, Department of Computer Science
Title: Decidable properties of monadic functional schemas
Author: Ashcroft, Edward A.
Author: Manna, Zohar
Author: Pnueli, Amir
Date: July 1971
Abstract:
We define a class of (monadic) functional schemas which properly
includes 'Ianov' flowchart schemas. We show that the termination,
divergence and freedom problems for functional schemas are decidable.
Although it is possible to translate a large class of non-free
functional schemas into equivalent free functional schemas, we show
that this cannot be done in general. We show also that the equivalence
problem for free functional schemas is decidable. Most of the results
are obtained from well-known results in Formal Languages and Automata
Theory.
CS-TR-71-217
Report Number: CS-TR-71-221
Institution: Stanford University, Department of Computer Science
Title: A heuristic programming study of theory formation in science
Author: Buchanan, Bruce G.
Author: Feigenbaum, Edward A.
Author: Lederberg, Joshua
Date: July 1971
Abstract:
The Meta-DENDRAL program is a vehicle for studying problems of theory
formation in science. The general strategy of Meta-DENDRAL is to reason
from data to plausible generalizations and then to organize the
generalizations into a unified theory. Three main subproblems are
discussed: (1) explain the experimental data for each individual
chemical structure, (2) generalize the results from each structure to
all structures, and (3) organize the generalizations into a unified
theory. The program is built upon the concepts and programmed routines
already available in the Heuristic DENDRAL performance program, but
goes beyond the performance program in attempting to formulate the
theory which the performance program will use.
CS-TR-71-221
Report Number: CS-TR-71-224
Institution: Stanford University, Department of Computer Science
Title: Parallel programming
Author: Ershov, Andrei P.
Date: July 1971
Abstract: This report is based on lectures given at Stanford University by Dr. Ershov in November, 1970.
CS-TR-71-224
Report Number: CS-TR-71-225
Institution: Stanford University, Department of Computer Science
Title: Numerical methods for computing angles between linear subspaces
Author: Bjoerck, Ake
Author: Golub, Gene H.
Date: July 1971
Abstract:
Assume that two subspaces F and G of unitary space are defined as the
ranges (or nullspaces) of given rectangular matrices A and B. Accurate
numerical methods are developed for computing the principal angles
$\theta_k(F,G)$ and orthogonal sets of principal vectors $u_k \in F$ and
$v_k \in G$, $k = 1, 2, \ldots, q = \dim(G) \leq \dim(F)$. An important
application in statistics is computing the canonical correlations
$\sigma_k = \cos\theta_k$ between two sets of
variates. A perturbation analysis shows that the condition number for
$\theta_k$ essentially is max($\kappa (A),\kappa (B)$), where $\kappa$
denotes the condition number of a matrix. The algorithms are based on a
preliminary QR-factorization of A and B (or $A^H$ and $B^H$), for which
either the method of Householder transformations (HT) or the modified
Gram-Schmidt method (MGS) is used. Then $\cos\theta_k$ and $\sin\theta_k$
are computed as the singular values of certain related matrices.
Experimental results are given, which indicate that MGS gives $\theta_k$
with equal precision and fewer arithmetic operations
than HT. However, HT gives principal vectors, which are orthogonal to
working accuracy, which is not in general true for MGS. Finally the
case when A and/or B are rank deficient is discussed.
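The core of the approach can be sketched with numpy (using numpy's QR in
place of the paper's Householder or modified Gram-Schmidt codes, and
computing only the cosines): orthonormalize A and B and read $\cos\theta_k$
off the singular values of $Q_A^H Q_B$. The report's further device of
computing $\sin\theta_k$ from related matrices, important for accuracy in
small angles, is not reproduced in this naive sketch.

```python
import numpy as np

def principal_angles(A, B):
    Qa, _ = np.linalg.qr(A)            # orthonormal basis for range(A)
    Qb, _ = np.linalg.qr(B)            # orthonormal basis for range(B)
    s = np.linalg.svd(Qa.conj().T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))   # theta_k, k = 1..dim(range B)

A = np.random.rand(20, 3)
B = np.random.rand(20, 2)
print(principal_angles(A, B))
```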
CS-TR-71-225
Report Number: CS-TR-71-226
Institution: Stanford University, Department of Computer Science
Title: SIMPLE: a simple precedence translator writing system
Author: George, James E.
Date: July 1971
Abstract:
SIMPLE is a translator writing system composed of a simple precedence
syntax analyzer and a semantic constructor and is implemented in PL/I.
It provides an error diagnostic and recovery mechanism for any system
implemented using SIMPLE. The removal of precedence conflicts is
discussed in detail with several examples. The utilization of SIMPLE is
illustrated by defining a command language meta system for the
construction of scanners for a wide variety of command oriented
languages. This meta system is illustrated by defining commands from
several text editors.
CS-TR-71-226
Report Number: CS-TR-71-228
Institution: Stanford University, Department of Computer Science
Title: Function minimization and automatic therapeutic control
Author: Kaufman, Linda C.
Date: July 1971
Abstract: No abstract available.
CS-TR-71-228
Report Number: CS-TR-71-229
Institution: Stanford University, Department of Computer Science
Title: Variational study of nonlinear spline curves
Author: Lee, Erastus H.
Author: Forsythe, George E.
Date: August 1971
Abstract:
This is an exposition of the variational and differential properties of
nonlinear spline curves, based on the Euler-Bernoulli theory for the
bending of thin beams or elastica. For both open and closed splines
through prescribed nodal points in the euclidean plane, various types
of nodal constraints are considered, and the corresponding algebraic
and differential equations relating curvature, angle, arc length, and
tangential force are derived in a simple manner. The results for closed
splines are apparently new, and they cannot be derived by the
consideration of a constrained conservative system. There is a survey
of the scanty recent literature.
CS-TR-71-229
Report Number: CS-TR-71-230
Institution: Stanford University, Department of Computer Science
Title: ALGOL W reference manual
Author: Sites, Richard L.
Date: February 1972
Abstract:
"A Contribution to the Development of ALGOL" by Niklaus Wirth and C. A.
R. Hoare was the basis for a compiler developed for the IBM 360 at
Stanford University. This report is a description of the implemented
language, ALGOL W. Historical background and the goals of the language
may be found in the Wirth and Hoare paper. This manual refers to the
version of the Algol W compiler dated 16 January 1972.
CS-TR-71-230
Report Number: CS-TR-71-234
Institution: Stanford University, Department of Computer Science
Title: Some modified eigenvalue problems
Author: Golub, Gene H.
Date: August 1971
Abstract:
We consider the numerical calculation of several eigenvalue problems
which require some manipulation before the standard algorithms may be
used. This includes finding the stationary values of a quadratic form
subject to linear constraints and determining the eigenvalues of a
matrix which is modified by a matrix of rank one. We also consider
several inverse eigenvalue problems. This includes the problem of
computing the Gauss-Radau and Gauss-Lobatto quadrature rules. In
addition, we study several eigenvalue problems which arise in least
squares.
CS-TR-71-234
Report Number: CS-TR-71-236
Institution: Stanford University, Department of Computer Science
Title: Numerical computations for univariate linear models
Author: Golub, Gene H.
Author: Styan, George P. H.
Date: September 1971
Abstract:
We consider the usual univariate linear model $E(\underset{\sim}{y}) =
\underset{\sim}{X}\,\underset{\sim}{\gamma}$, $V(\underset{\sim}{y}) =
\sigma^2 \underset{\sim}{I}$. In Part One of this paper $\underset{\sim}{X}$
has full column rank. Numerically stable and efficient computational
procedures are developed for the least squares estimation of
$\underset{\sim}{\gamma}$ and the error sum of squares. We employ an
orthogonal triangular decomposition of $\underset{\sim}{X}$ using
Householder transformations. A lower bound for the condition number of
$\underset{\sim}{X}$ is immediately obtained from this decomposition.
Similar computational procedures are presented for the usual F-test of
the general linear hypothesis $\underset{\sim}{L}'\,\underset{\sim}{\gamma}
= \underset{\sim}{0}$; the hypothesis $\underset{\sim}{L}'\,\underset{\sim}{\gamma}
= \underset{\sim}{m}$ is also considered for $\underset{\sim}{m} \neq
\underset{\sim}{0}$. Updating techniques are given for adding to or
removing from $(\underset{\sim}{X}, \underset{\sim}{y})$ a row, a set of
rows, or a column. In Part Two, $\underset{\sim}{X}$ has less than full
rank. Least squares estimates are obtained using generalized inverses.
The function $\underset{\sim}{L}'\,\underset{\sim}{\gamma}$ is estimable
whenever it admits an unbiased estimator linear in $\underset{\sim}{y}$.
We show how to computationally verify estimability of
$\underset{\sim}{L}'\,\underset{\sim}{\gamma}$ and the equivalent
testability of $\underset{\sim}{L}'\,\underset{\sim}{\gamma} = \underset{\sim}{0}$.
CS-TR-71-236
Report Number: CS-TR-71-237
Institution: Stanford University, Department of Computer Science
Title: A generalization of the divide-sort-merge strategy for sorting networks
Author: Van Voorhis, David C.
Date: August 1971
Abstract:
With a few notable exceptions the best sorting networks known have
employed a "divide-sort-merge" strategy. That is, the N inputs are
divided into 2 groups - - normally of size $\lceil \frac{1}{2} N\rceil$
and $\lfloor \frac{1}{2} N\rfloor$ [Here $\lceil x\rceil$ denotes the
smallest integer greater than or equal to x, whereas $\lfloor x\rfloor$
denotes the largest integer less than or equal to x] - - that are
sorted independently and then "merged" together to form a single sorted
sequence. An N-sorter network that uses this strategy consists of 2
smaller sorting networks followed by a merge network. The best merge
networks known are also constructed recursively, using 2 smaller merge
networks followed by a simple arrangement of $\lceil \frac{1}{2}
N\rceil$ - 1 comparators. We consider a generalization of the
divide-sort-merge strategy in which the N inputs are divided into g
$\geq$ 2 disjoint groups that are sorted independently and then merged
together. The merge network that combines these g sorted groups uses d
$\geq$ 2 smaller merge networks as an initial subnetwork. The two
parameters g and d together define what we call a "[g,d]" strategy. A
[g,d] N-sorter network consists of g smaller sorting networks followed
by a [g,d] merge network. The initial portion of the [g,d] merge
network consists of d smaller merge networks; the final portion, which
we call the "f-network," includes whatever additional comparators are
required to complete the merge. When g = d = 2, the f-network is a
simple arrangement of $\lceil \frac{1}{2} N\rceil$ - 1 comparators;
however, for larger g,d the structure of the [g,d] f-network becomes
increasingly complicated. In this paper we describe how to construct
[g,d] f-networks for arbitrary g,d. For N > 8 the resulting [g,d]
N-sorter networks are more economical than any previous networks that
use the divide-sort-merge strategy; for N > 34 the resulting
networks are more economical than previous networks of any
construction. The [4,4] N-sorter network described in this paper
requires $\frac{1}{4} N(\log_2 N)^2 - \frac{1}{3} N(\log_2 N) + O(N)$
comparators, which represents an asymptotic improvement of
$\frac{1}{12} N(\log_2 N)$ comparators over the best previous N-sorter.
We indicate that special constructions (not described in this paper)
have been found for $[2^r, 2^r]$ f-networks, which lead to an N-sorter
network that requires only $0.25\, N(\log_2 N)^2 - 0.372\, N(\log_2 N) +
O(N)$ comparators.
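For concreteness, the classical g = d = 2 instance of the divide-sort-merge
strategy (Batcher's odd-even merge sort) can be generated and exercised with
the short Python sketch below; this is the construction that the [g,d]
networks generalize, not the [4,4] construction of the report, and N is
assumed to be a power of two.

```python
def oddeven_merge_sort(n):
    def sorter(lo, length):
        if length > 1:
            m = length // 2
            yield from sorter(lo, m)          # sort the two halves...
            yield from sorter(lo + m, m)
            yield from merge(lo, length, 1)   # ...then merge them
    def merge(lo, length, r):
        step = 2 * r
        if step < length:
            yield from merge(lo, length, step)        # merge even subsequences
            yield from merge(lo + r, length, step)    # merge odd subsequences
            for i in range(lo + r, lo + length - r, step):
                yield (i, i + r)                      # final comparator stage
        else:
            yield (lo, lo + r)
    return list(sorter(0, n))

def apply_network(network, data):
    data = list(data)
    for i, j in network:                 # each comparator orders one pair
        if data[i] > data[j]:
            data[i], data[j] = data[j], data[i]
    return data

net = oddeven_merge_sort(8)
print(len(net), apply_network(net, [7, 3, 6, 0, 5, 2, 1, 4]))
# 19 comparators; the data comes out sorted
```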
CS-TR-71-237
Report Number: CS-TR-71-238
Institution: Stanford University, Department of Computer Science
Title: A lower bound for sorting networks that use the divide-sort-merge strategy
Author: Van Voorhis, David C.
Date: August 1971
Abstract:
Let $M_g (g^{k+1})$ represent the minimum number of comparators
required by a network that merges g sorted multisets containing $g^k$
members each. In this paper we prove that $M_g(g^{k+1}) \geq g\,
M_g(g^k) + g^{k-1} \sum_{\ell=2}^{g} \lfloor (\ell-1)g/\ell \rfloor$.
From this relation we are able to show that an N-sorter network which
uses the g-way divide-sort-merge strategy must contain at least order
$N(\log_2 N)^2$ comparators.
CS-TR-71-238
Report Number: CS-TR-71-239
Institution: Stanford University, Department of Computer Science
Title: Large [g,d] sorting networks
Author: Van Voorhis, David C.
Date: August 1971
Abstract:
With only a few exceptions the minimum-comparator N-sorter networks
employ the generalized "divide-sort-merge" strategy. That is, the N
inputs are divided among g $\geq$ 2 smaller sorting networks -- of size
$N_1,N_2,...,N_g$, where $N = \sum_{k=1}^{g} N_k$ -- that comprise the
initial portion of the N-sorter network. The remainder of the N-sorter
is a comparator network that merges the outputs of the $N_1-, N_2-,
...,$ and $N_g$-sorter networks into a single sorted sequence. The most
economical merge networks yet designed, known as the "[g,d]" merge
networks, consist of d smaller merge networks -- where d is a common
divisor of $N_1,N_2,...,N_g$ -- followed by a special comparator
network labeled a "[g,d] f-network." In this paper we describe special
constructions for $[2^r,2^r]$ f-networks, r > 1, which enable us to
reduce the number of comparators required by a large N-sorter network
from $0.25\,N(\log_2 N)^2 - 0.25\,N(\log_2 N) + O(N)$ to
$0.25\,N(\log_2 N)^2 - 0.37\,N(\log_2 N) + O(N)$.
CS-TR-71-239
Report Number: CS-TR-71-240
Institution: Stanford University, Department of Computer Science
Title: Correctness of two compilers for a Lisp subset
Author: London, Ralph L.
Date: October 1971
Abstract:
Using mainly structural induction, proofs of correctness of each of two
running Lisp compilers for the PDP-10 computer are given. Included are
the rationale for presenting these proofs, a discussion of the proofs,
and the changes needed to the second compiler to complete its proof.
CS-TR-71-240
Report Number: CS-TR-71-242
Institution: Stanford University, Department of Computer Science
Title: The frame problem and related problems in artificial intelligence
Author: Hayes, Patrick J.
Date: November 1971
Abstract:
The frame problem arises in considering the logical structure of a
robot's beliefs. It has been known for some years, but only recently
has much progress been made. The problem is described and discussed.
Various suggested methods for its solution are outlined, and described
in a uniform notation. Finally, brief consideration is given to the
problem of adjusting a belief system in the face of evidence which
contradicts beliefs. It is shown that a variation on the situation
notation of (McCarthy and Hayes, 1969) permits an elegant approach, and
relates this problem to the frame problem.
CS-TR-71-242
Report Number: CS-TR-71-246
Institution: Stanford University, Department of Computer Science
Title: A resemblance test for the validation of a computer simulation of paranoid processes
Author: Colby, Kenneth Mark
Author: Hilf, Franklin Dennis
Author: Weber, Sylvia
Author: Kraemer, Helena C.
Date: November 1971
Abstract:
A computer simulation of paranoid processes in the form of a dialogue
algorithm was subjected to a validation study using an experimental
resemblance test in which judges rated degrees of paranoia present in
initial psychiatric interviews of both paranoid patients and of
versions of the paranoid model. The statistical results indicate a
satisfactory degree of resemblance between the two groups of
interviews. It is concluded that the model provides a successful
simulation of naturally occurring paranoid processes.
CS-TR-71-246
Report Number: CS-TR-71-247
Institution: Stanford University, Department of Computer Science
Title: One small head -- some remarks on the use of 'model' in linguistics
Author: Wilks, Yorick A.
Date: December 1971
Abstract:
I argue that the present situation in formal linguistics, where much
new work is presented as being a "model of the brain", or of "human
language behavior", is an undesirable one. My reason for this judgement
is not the conservative (Braithwaitian) one that the entities in
question are not really models but theories. It is rather that they are
called models because they cannot be theories of the brain at the
present stage of brain research, and hence that the use of "model" in
this context is not so much aspirational as resigned about our total
ignorance of how the brain stores and processes linguistic information.
The reason such explanatory entities cannot be theories is that this
ignorance precludes any "semantic ascent" up the theory; i.e.,
interpreting the items of the theory in terms of observables. And the
brain items, whatever they may be, are not, as Chomsky has sometimes
claimed, in the same position as the "occult entities" of Physics like
Gravitation; for the brain items are not theoretically unreachable,
merely unreached. I then examine two possible alternate views of what
linguistic theories should be proffered as theories of: theories of
sets of sentences, and theories of a particular class of algorithms. I
argue for a form of the latter view, and that its acceptance would also
have the effect of making Computational Linguistics a central part of
Linguistics, rather than the poor relation it is now. I examine a
distinction among "linguistic models" proposed recently by Mey, who was
also arguing for the self-sufficiency of Computational Linguistics,
though as a "theory of performance". I argue that his distinction is a
bad one, partly for the reasons developed above and partly because he
attempts to tie it to Chomsky's inscrutable competence-performance
distinction. I conclude that the independence and self-sufficiency of
Computational Linguistics are better supported by the arguments of this
paper.
CS-TR-71-247
Report Number: CS-TR-71-249
Institution: Stanford University, Department of Computer Science
Title: An annotated bibliography on the construction of compilers
Author: Pollack, Bary W.
Date: December 1971
Abstract:
This bibliography is divided into 9 sections: 1. General Information on
Compiling Techniques 2. Syntax- and Base-Directed Parsing 3. Parsing in
General 4. Resource Allocation 5. Errors - Detection and Correction 6.
Compiler Implementation in General 7. Details of Compiler Construction
8. Additional Topics 9. Miscellaneous Related References. Within each
section the entries are alphabetical by author. Keywords describing each
entry will be found set off by pound signs (#). Some
amount of cross-referencing has been done; e.g., entries which fall
into Section 3 as well as Section 7 will generally be found in both
sections. However, entries will be found listed only under the
principal or first author's name. "Computing Reviews" citations are
given following the annotation when available.
CS-TR-71-249
Report Number: CS-TR-71-250
Institution: Stanford University, Department of Computer Science
Title: Program schemas with equality
Author: Chandra, Ashok K.
Author: Manna, Zohar
Date: December 1971
Abstract:
We discuss the class of program schemas augmented with equality tests,
that is, tests of equality between terms. In the first part of the
paper we discuss and illustrate the "power" of equality tests. It turns
out that the class of program schemas with equality is more powerful
than the "maximal" classes of schemas suggested by other investigators.
In the second part of the paper we discuss the decision problems of
program schemas with equality. It is shown for example that while the
decision problems normally considered for schemas (such as halting,
divergence, equivalence, isomorphism and freedom) are solvable for
Ianov schemas, they all become unsolvable if general equality tests are
added. We suggest, however, limited equality tests which can be added
to certain subclasses of program schemas while preserving their
solvable properties.
CS-TR-71-250
Report Number: CS-TR-72-252
Institution: Stanford University, Department of Computer Science
Title: Large-scale linear programming using the Cholesky factorization.
Author: Saunders, Michael A.
Date: January 1972
Abstract:
A variation of the revised simplex method is proposed for solving the
standard linear programming problem. The method is derived from an
algorithm recently proposed by Gill and Murray, and is based upon the
orthogonal factorization B = LQ or, equivalently, upon the Cholesky
factorization ${BB}^T = {LL}^T$ where B is the usual square basis, L is
lower triangular and Q is orthogonal. We wish to retain the favorable
numerical properties of the orthogonal factorization, while extending
the work of Gill and Murray to the case of linear programs which are
both large and sparse. The principal property exploited is that the
Cholesky factor L depends only on $\underline{which}$ variables are in
the basis, and not upon the $\underline{order}$ in which they happen to
enter. A preliminary ordering of the rows of the full data matrix
therefore promises to ensure that L will remain sparse throughout the
iterations of the simplex method. An initial (in-core) version of the
algorithm has been implemented in Algol W on the IBM 360/91 and tested
on several medium-scale problems from industry (up to 930 constraints).
While performance has not been especially good on problems of high
density, the method does appear to be efficient on problems which are
very sparse, and on structured problems which have either generalized
upper bounding, block-angular, or staircase form.
CS-TR-72-252
Report Number: CS-TR-72-253
Institution: Stanford University, Department of Computer Science
Title: Total complexity and the inference of best programs.
Author: Feldman, Jerome A.
Author: Shields, Paul C.
Date: April 1972
Abstract:
Axioms for a total complexity measure for abstract programs are
presented. Essentially, they require that total complexity be an
unbounded increasing function of the Blum time and size measures.
Algorithms for finding the best program on a finite domain are
presented, and their limiting behaviour for infinite domains described.
For total complexity, there are important senses in which a machine
$\underline{can}$ find the best program for a large class of functions.
CS-TR-72-253
Report Number: CS-TR-72-254
Institution: Stanford University, Department of Computer Science
Title: Von Neumann's comparison method for random sampling from the normal and other distributions.
Author: Forsythe, George E.
Date: January 1972
Abstract:
The author presents a generalization he worked out in 1950 of von
Neumann's method of generating random samples from the exponential
distribution by comparisons of uniform random numbers on (0,1). It is
shown how to generate samples from any distribution whose probability
density function is piecewise both absolutely continuous and monotonic
on ($-\infty$,$\infty$). A special case delivers normal deviates at an
average cost of only 4.036 uniform deviates each. This seems more
efficient than the Center-Tail method of Dieter and Ahrens, which uses
a related, but different, method of generalizing the von Neumann idea
to the normal distribution.
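As an illustration of the comparison idea in the exponential case (the
report's starting point), a minimal Python sketch follows; it is not the
report's generalized algorithm, and the names are ours.

    import random

    def von_neumann_exponential(rng=random):
        # Sample from Exp(1) using only uniform deviates and comparisons.
        integer_part = 0
        while True:
            first = u = rng.random()
            run_length = 1                   # length of the descending run
            while True:
                v = rng.random()
                if v >= u:                   # the descending run is broken
                    break
                u = v
                run_length += 1
            if run_length % 2 == 1:          # odd run: accept the first uniform
                return integer_part + first
            integer_part += 1                # even run: reject, add 1, retry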
CS-TR-72-254
Report Number: CS-TR-72-255
Institution: Stanford University, Department of Computer Science
Title: Automatic programming.
Author: Feldman, Jerome A.
Date: February 1972
Abstract:
The revival of interest in Automatic Programming is considered. The
research is divided into direct efforts and theoretical developments
and the successes and prospects of each are described.
CS-TR-72-255
Report Number: CS-TR-72-256
Institution: Stanford University, Department of Computer Science
Title: Edmonds polyhedra and weakly hamiltonian graphs.
Author: Chvatal, Vaclav
Date: January 1972
Abstract:
Jack Edmonds developed a new way of looking at extremal combinatorial
problems and applied his technique with a great success to the problems
of the maximal-weight degree-constrained subgraphs. Professor C. St. J.
A. Nash-Williams suggested to use Edmonds' approach in the context of
hamiltonian graphs. In the present paper, we determine a new set of
inequalities (the "comb inequalities") which are satisfied by the
characteristic functions of hamiltonian circuits but are not explicit
in the straightforward integer programming formulation. A direct
application of the linear programming duality theorem then leads to a
new necessary condition for the existence of hamiltonian circuits; this
condition appears to be stronger than the previously known ones.
Relating linear programming to hamiltonian circuits, the present paper
can also be seen as a continuation of the work of Dantzig, Fulkerson
and Johnson on the travelling salesman problem.
CS-TR-72-256
Report Number: CS-TR-72-257
Institution: Stanford University, Department of Computer Science
Title: On "PASCAL," code generation, and the CDC 6000 computer.
Author: Wirth, Niklaus
Date: February 1972
Abstract:
"PASCAL" is a general purpose programming language with characteristics
similar to ALGOL 60, but with an enriched set of program- and data
structuring facilities. It has been implemented on the CDC 6000
computer. This paper discusses selected topics of code generation, in
particular the selection of instruction sequences to represent simple
operations on arithmetic, Boolean, and powerset operands. Methods to
implement recursive procedures are briefly described, and it is hinted
that the more sophisticated solutions are not necessarily also the
best. The CDC 6000 architecture appears as a frequent source of
pitfalls and nuisances, and its main trouble spots are scrutinized and
discussed.
CS-TR-72-257
Report Number: CS-TR-72-258
Institution: Stanford University, Department of Computer Science
Title: Some basic machine algorithms for integral order computations.
Author: Brown, Harold
Date: February 1972
Abstract:
Three machine implemented algorithms for computing with integral orders
are described. The algorithms are: 1. For an integral order R given in
terms of its left regular representation relative to any basis, compute
the nil radical J(R) and a left regular representation of R/J(R). 2.
For a semisimple order R given in terms of its left regular
representation relative to any basis, compute a new basis for R and the
associated left regular representation of R such that the first basis
element of the transformed basis is an integral multiple of the
identity element in Q $\bigotimes$ R. 3. Relative to any fixed Z-basis
for R, compute a unique canonical form for any given finitely generated
Z-submodule of Q $\bigotimes$ R described in terms of that basis.
CS-TR-72-258
Report Number: CS-TR-72-261
Institution: Stanford University, Department of Computer Science
Title: The differentiation of pseudoinverses and nonlinear least squares problems whose variables separate.
Author: Golub, Gene H.
Author: Pereyra, Victor
Date: February 1972
Abstract:
For given data $(t_i, y_i),\ i=1, \ldots ,m$, we consider the least
squares fit of nonlinear models of the form $F(\underset{\sim}{a},
\underset{\sim}{\alpha}; t) = \sum_{j=1}^{n} g_j(\underset{\sim}{a})\,
\varphi_j(\underset{\sim}{\alpha}; t)$, $\underset{\sim}{a} \in R^s$,
$\underset{\sim}{\alpha} \in R^k$. For this purpose we study the
minimization of the nonlinear functional $r(\underset{\sim}{a},
\underset{\sim}{\alpha}) = \sum_{i=1}^{m} {(y_i - F(\underset{\sim}{a},
\underset{\sim}{\alpha}; t_i))}^2$. It is shown that by defining the
matrix ${\{\Phi(\underset{\sim}{\alpha})\}}_{i,j} =
\varphi_j(\underset{\sim}{\alpha}; t_i)$, and the modified functional
$r_2(\underset{\sim}{\alpha}) = \|\underset{\sim}{y} -
\Phi(\underset{\sim}{\alpha})\Phi^+(\underset{\sim}{\alpha})
\underset{\sim}{y}\|_2^2$, it is possible to optimize first with
respect to the parameters $\underset{\sim}{\alpha}$, and then to
obtain, a posteriori, the optimal parameters $\hat{\underset{\sim}{a}}$.
The matrix $\Phi^+(\underset{\sim}{\alpha})$ is the Moore-Penrose
generalized inverse of $\Phi(\underset{\sim}{\alpha})$, and we develop
formulas for its Frechet derivative under the hypothesis that
$\Phi(\underset{\sim}{\alpha})$ is of constant (though not necessarily
full) rank. From these formulas we readily obtain the derivatives of
the orthogonal projectors associated with $\Phi(\underset{\sim}{\alpha})$,
and also that of the functional $r_2(\underset{\sim}{\alpha})$. Detailed
algorithms are presented which make extensive use of well-known reliable
linear least squares techniques, and numerical results and comparisons
are given. These results are generalizations of those of H. D. Scolnik
[1971].
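A small numerical sketch of the idea, assuming a hypothetical separable
model $y \approx a_1 + a_2 e^{-\alpha t}$: the linear parameters are
eliminated through the least squares (pseudoinverse) solution and
$r_2(\alpha)$ is then minimized numerically rather than with the
report's analytic derivative formulas. The model, data and library
calls below are our assumptions, not the report's.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical separable model: y ~ a1 + a2*exp(-alpha*t)  (illustrative only).
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 4.0, 40)
    y = 1.0 + 2.0 * np.exp(-1.5 * t) + 0.01 * rng.standard_normal(t.size)

    def Phi(alpha):
        return np.column_stack([np.ones_like(t), np.exp(-alpha * t)])

    def r2(alpha):
        # a = Phi(alpha)^+ y, so r2 = || y - Phi(alpha) Phi(alpha)^+ y ||^2
        a = np.linalg.lstsq(Phi(alpha), y, rcond=None)[0]
        return float(np.sum((y - Phi(alpha) @ a) ** 2))

    alpha_hat = minimize_scalar(r2, bounds=(0.1, 5.0), method='bounded').x
    a_hat = np.linalg.lstsq(Phi(alpha_hat), y, rcond=None)[0]  # a posteriori linear parameters
    print(alpha_hat, a_hat)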
CS-TR-72-261
Report Number: CS-TR-72-263
Institution: Stanford University, Department of Computer Science
Title: A procedure for improving the upper bound for the number of n-ominoes.
Author: Klarner, David A.
Author: Rivest, Ronald L.
Date: February 1972
Abstract:
An n-omino is a plane figure composed of n unit squares joined together
along their edges. Every n-omino is generated by joining the edge of a
unit square to the edge of a unit square in some (n-1)-omino so that
the new square does not overlap any squares. Let t(n) denote the number
of n-ominoes; then it is known that the sequence $({t(n)}^{1/n} : n =
1,2,\ldots)$ increases to a limit $\Theta$, and $3.72 < \Theta <
6.75$. A procedure exists for computing an increasing sequence of
numbers bounded above by $\Theta$. (Chandra recently showed that the
limit of this sequence is $\Theta$ .) In the present work we give a
procedure for computing a sequence of numbers bounded below by $\Theta$
. Whether or not the limit of this sequence is $\Theta$ remains an open
question. By computing the first ten terms of our sequence, we have
shown that $\Theta$ < 4.65 .
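The first few values of t(n) can be reproduced by brute-force growth,
adding one square at a time exactly as described above. The sketch
below is ours, counts n-ominoes up to translation (which appears
consistent with the quoted bounds), and is of course far too slow to
contribute to them.

    def count_n_ominoes(n):
        # Grow every (k+1)-omino from a k-omino by attaching one unit square,
        # identifying figures that differ only by a translation.
        def canon(cells):
            x0 = min(x for x, _ in cells)
            y0 = min(y for _, y in cells)
            return frozenset((x - x0, y - y0) for x, y in cells)

        shapes = {canon({(0, 0)})}
        for _ in range(n - 1):
            grown = set()
            for shape in shapes:
                for (x, y) in shape:
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        cell = (x + dx, y + dy)
                        if cell not in shape:
                            grown.add(canon(shape | {cell}))
            shapes = grown
        return len(shapes)

    print([count_n_ominoes(n) for n in range(1, 7)])   # [1, 2, 6, 19, 63, 216]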
CS-TR-72-263
Report Number: CS-TR-72-264
Institution: Stanford University, Department of Computer Science
Title: An artificial intelligence approach to machine translation.
Author: Wilks, Yorick A.
Date: February 1972
Abstract:
The paper describes a system of semantic analysis and generation,
programmed in LISP 1.5 and designed to pass from paragraph length input
in English to French via an interlingual representation. A wide class
of English input forms will be covered, but the vocabulary will
initially be restricted to one of a few hundred words. With this subset
working, and during the current year (71-72), it is also hoped to map
the interlingual representation onto some predicate calculus notation
so as to make possible the answering of very simple questions about the
translated matter. The specification of the translation system itself
is complete, and its main points of interest that distinguish it from
other systems are: i) It translates phrase by phrase -- with facilities
for reordering phrases and establishing essential semantic
connectivities between them -- by mapping complex semantic structures
of "message" onto each phrase. These constitute the interlingual
representation to be translated. This matching is done without the
explicit use of a conventional syntax analysis, by taking as the
appropriate matched structure the "most dense" of the alternative
structures derived. This method has been found highly successful in
earlier versions of this analysis system. ii) The French output strings
are generated without the explicit use of a generative grammar. That is
done by means of STEREOTYPES: strings of French words, and functions
evaluating to French words, which are attached to English word senses
in the dictionary and built into the interlingual representation by the
analysis routines. The generation program thus receives an interlingual
representation that already contains both French output and implicit
procedures for assembling the output, since the stereotypes are in
effect recursive procedures specifying the content and production of
the output word strings. Thus the generation program at no time consults
a word dictionary or inventory of grammar rules. It is claimed that the
system of notation and translation described is a convenient one for
expressing and handling the items of semantic information that are
ESSENTIAL to any effective MT system. I discuss in some detail the
semantic information needed to ensure the correct choice of output
prepositions in French, a vital matter inadequately treated by
virtually all previous formalisms and projects.
CS-TR-72-264
Report Number: CS-TR-72-265
Institution: Stanford University, Department of Computer Science
Title: Primitive concepts underlying verbs of thought.
Author: Schank, Roger C.
Author: Goldman, Neil M.
Author: Rieger, Charles J.
Author: Riesbeck, Christopher K.
Date: February 1972
Abstract:
In order to create conceptual structures that will uniquely and
unambiguously represent the meaning of an utterance, it is necessary to
establish 'primitive' underlying actions and states into which verbs
can be mapped. This paper presents analyses of the most common mental
verbs in terms of such primitive actions and states. In order to
represent the way people speak about their mental processes, it was
necessary to add to the usual ideas of memory structure the notion of
Immediate Memory. It is then argued that there are only three primitive
mental ACTs.
CS-TR-72-265
Report Number: CS-TR-72-267
Institution: Stanford University, Department of Computer Science
Title: Mathematical Programming Language: an appraisal based on practical experiments.
Author: Bonzon, Pierre E.
Date: March 1972
Abstract:
The newly proposed Mathematical Programming Language is approached from
the user's point of view. To demonstrate its facility of use, three
programs are presented which solve large scale linear programming
problems with the generalized upper-bounding structure.
CS-TR-72-267
Report Number: CS-TR-72-268
Institution: Stanford University, Department of Computer Science
Title: Degrees and matchings.
Author: Chvatal, Vaclav
Date: March 1972
Abstract:
Let n, b, d be positive integers. D. Hanson proposed to evaluate
f(n,b,d), the largest possible number of edges in a graph with n
vertices having no vertex of degree greater than d and no set of more
than b independent edges. Using the alternating path method, he found
partial results in this direction. We complete Hanson's work; our proof
technique has a linear programming flavor and uses Berge's matching
formula.
CS-TR-72-268
Report Number: CS-TR-72-269
Institution: Stanford University, Department of Computer Science
Title: Arithmetic properties of certain recursively defined sets.
Author: Klarner, David A.
Author: Rado, Richard
Date: March 1972
Abstract:
Let R denote a set of linear operations defined on the set P of
positive integers; for example, a typical element of R has the form
$\rho (x_1, \ldots ,x_r) = m_0 + m_1 x_1 + \ldots + m_r x_r$ where
$m_0, \ldots ,m_r$ denote certain integers. Given a set A of positive
integers, there is a smallest set of positive integers, denoted
<R : A>, which contains A as a subset and is closed under every
operation in R. The set <R : A> can be constructed recursively as
follows: Let $A_0$ = A, and define $A_{k+1} = A_k \cup \{\rho (\bar{a}):
\rho \in R,\ \bar{a}\in A_k \times \ldots \times A_k\}$ (k = 0,1,\ldots);
then it can be shown that <R : A> = $A_0 \cup A_1 \cup \ldots$. The sets
<R : A> sometimes have an elegant form; for example, the set <2x + 3y: 1>
consists of all positive numbers congruent to 1 or 5 modulo 12. The
objective is to give an arithmetic characterization of the elements of a
set <R : A>, and this paper is a report on progress made on this problem
last year. Many of the questions left open here have since been
resolved by one of us (Klarner).
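The example mentioned above is easy to check by machine. In the
throwaway sketch below (ours, not part of the report) the closure is
computed up to a bound; since the operation's value always exceeds its
arguments, the bounded fixpoint is exactly the initial segment of
<2x + 3y: 1>.

    LIMIT = 500
    S = {1}
    changed = True
    while changed:
        changed = False
        for x in sorted(S):
            for y in sorted(S):
                v = 2 * x + 3 * y              # the single operation in R
                if v <= LIMIT and v not in S:
                    S.add(v)
                    changed = True
    print(sorted(S)[:8])                 # [1, 5, 13, 17, 25, 29, 37, 41]
    print(sorted({v % 12 for v in S}))   # [1, 5]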
CS-TR-72-269
Report Number: CS-TR-72-270
Institution: Stanford University, Department of Computer Science
Title: The Lanczos algorithm for the symmetric Ax = $\lambda$Bx problem.
Author: Golub, Gene H.
Author: Underwood, Richard R.
Author: Wilkinson, James H.
Date: March 1972
Abstract:
The problem of computing the eigensystem of Ax = $\lambda$Bx when A and
B are symmetric and B is positive definite is considered. A
generalization of the Lanczos algorithm for reducing the problem to a
symmetric tridiagonal eigenproblem is given. A numerically stable
variant of the algorithm is described. The new algorithm depends
heavily upon the computation of elementary Hermitian matrices. An ALGOL
W procedure and a numerical example are also given.
CS-TR-72-270
Report Number: CS-TR-72-272
Institution: Stanford University, Department of Computer Science
Title: Fixpoint approach to the theory of computation.
Author: Manna, Zohar
Author: Vuillemin, Jean
Date: March 1972
Abstract:
Following the fixpoint theory of Scott, we propose to define the
semantics of computer programs in terms of the least fixpoints of
recursive programs. This allows one not only to justify all existing
verification techniques, but also to extend them to handle various
properties of computer programs, including correctness, termination and
equivalence, in a uniform manner.
CS-TR-72-272
Report Number: CS-TR-72-273
Institution: Stanford University, Department of Computer Science
Title: Chromatic automorphisms of graphs.
Author: Chvatal, Vaclav
Author: Sichler, Jiri
Date: March 1972
Abstract:
The coloring group and the full automorphism group of an n-chromatic
graph are independent if and only if n is an integer $\geq$ 3.
CS-TR-72-273
Report Number: CS-TR-72-274
Institution: Stanford University, Department of Computer Science
Title: Linear combinations of sets of consecutive integers.
Author: Klarner, David A.
Author: Rado, Richard
Date: March 1972
Abstract:
Let $k \geq 1$ and let $m_1, \ldots ,m_k$ denote non-negative integers,
and suppose the greatest common divisor of $m_1, \ldots ,m_k$ is 1. We
show that if $S_1, \ldots ,S_k$ are sufficiently long blocks of
consecutive integers, then the set $m_1 S_1 + \ldots + m_k S_k$ contains
a sizable block of consecutive integers. For example, if m and n are
relatively prime natural numbers, and u, U, v, V are integers with
U-u $\geq$ n-1, V-v $\geq$ m-1, then the set m{u,u+1, $\ldots$,U} +
n{v,v+1, $\ldots$,V} contains the set {mu + nv + $\sigma$(m,n),
$\ldots$,mU + nV - $\sigma$(m,n)} where $\sigma$(m,n) = (m-1)(n-1) is
the largest number
such that $\sigma$(m,n)-1 cannot be expressed in the form mx + ny with
x and y non-negative integers.
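The two-block case can be verified mechanically; this short sketch
(ours) checks the stated containment for one choice of m, n and the
shortest admissible blocks.

    m, n = 3, 5
    u, U = 0, n - 1                     # U - u >= n - 1
    v, V = 0, m - 1                     # V - v >= m - 1
    sigma = (m - 1) * (n - 1)

    sumset = {m * x + n * y for x in range(u, U + 1) for y in range(v, V + 1)}
    block = set(range(m * u + n * v + sigma, m * U + n * V - sigma + 1))
    print(block <= sumset)   # True: the block of consecutive integers is covered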
CS-TR-72-274
Report Number: CS-TR-72-275
Institution: Stanford University, Department of Computer Science
Title: Sets generated by iteration of a linear operation.
Author: Klarner, David A.
Date: March 1972
Abstract:
This note is a continuation of the paper "Arithmetic properties of
certain recursively defined sets," written in collaboration with
Richard Rado. Here the sets under consideration are those having the
form S = <$m_1 x_1 + \ldots\ + m_r x_r : 1$> where $m_1,\ldots
,m_r$ are given natural numbers with greatest common divisor 1. The set
S is the smallest set of natural numbers which contains 1 and is closed
under the operation $m_1 x_1 + \ldots\ + m_r x_r$. Also, S can be
constructed by iterating the operation $m_1 x_1 + \ldots\ + m_r x_r$
over the set {1}. For example, <2x + 3y: 1> = {1, 5, 13, 17, 25,
$\ldots$} = (1 + 12N) $\cup$ (5 + 12N) where N = {0,1,2,$\ldots$}. It
is shown in this note that S contains an infinite arithmetic
progression for all natural numbers r-1,$m_1,\ldots ,m_r$. Furthermore,
if ($m_1,\ldots ,m_r$) = ($m_1\ldots m_r,m_1 + \ldots\ + m_r$) = 1,
then S is a per-set; that is, S is a finite union of infinite
arithmetic progressions. In particular, this implies that <mx + ny: 1>
is a per-set for all pairs {m,n} of relatively prime natural numbers.
It is an open question whether S is a per-set when ($m_1,\ldots ,m_r$)
= 1, but ($m_1\ldots m_r,m_1 + \ldots\ + m_r$) > 1.
CS-TR-72-275
Report Number: CS-TR-72-278
Institution: Stanford University, Department of Computer Science
Title: Use of fast direct methods for the efficient numerical solution of nonseparable elliptic equations.
Author: Concus, Paul
Author: Golub, Gene H.
Date: April 1972
Abstract:
We study an iterative technique for the numerical solution of strongly
elliptic equations of divergence form in two dimensions with Dirichlet
boundary conditions on a rectangle. The technique is based on the
repeated solution by a fast direct method of a discrete Helmholtz
equation on a uniform rectangular mesh. The problem is suitably scaled
before iteration, and Chebyshev acceleration is applied to improve
convergence. We show that convergence can be exceedingly rapid and
independent of mesh size for smooth coefficients. Extensions to other
boundary conditions, other equations, and irregular mesh spacings are
discussed, and the performance of the technique is illustrated with
numerical examples.
CS-TR-72-278
Report Number: CS-TR-72-279
Institution: Stanford University, Department of Computer Science
Title: Topics in optimization.
Author: Osborne, Michael R.
Date: April 1972
Abstract:
These notes are based on a course of lectures given at Stanford, and
cover three major topics relevant to optimization theory. First an
introduction is given to those results in mathematical programming
which appear to be most important for the development and analysis of
practical algorithms. Next unconstrained optimization problems are
considered. The main emphasis is on that subclass of descent methods
which (a) requires the evaluation of first derivatives of the objective
function, and (b) has a family connection with the conjugate direction
methods. Numerical results obtained using a program based on this
material are discussed in an Appendix. In the third section, penalty
and barrier function methods for mathematical programming problems are
studied in some detail, and possible methods for accelerating their
convergence indicated.
CS-TR-72-279
Report Number: CS-TR-72-281
Institution: Stanford University, Department of Computer Science
Title: Computer interactive picture processing.
Author: Quam, Lynn H.
Author: Liebes, Sidney, Jr.
Author: Tucker, Robert B.
Author: Hannah, Marsha Jo
Author: Eross, Botond G.
Date: April 1972
Abstract:
This report describes work done in image processing using an
interactive computer system. Techniques for image differencing are
described and examples using images returned from Mars by the Mariner
Nine spacecraft are shown. Also described are techniques for stereo
image processing. Stereo processing for both conventional camera
systems and the Viking 1975 Lander camera system is reviewed.
CS-TR-72-281
Report Number: CS-TR-72-282
Institution: Stanford University, Department of Computer Science
Title: Efficient compilation of linear recursive programs.
Author: Chandra, Ashok K.
Date: April 1972
Abstract:
We consider the class of linear recursive programs. A linear recursive
program is a set of procedures where each procedure can make at most
one recursive call. The conventional stack implementation of recursion
requires time and space both proportional to n, the depth of recursion.
It is shown that in order to implement linear recursion so as to
execute in time n one doesn't need space proportional to n:
$n^\epsilon$ for arbitrarily small $\epsilon$ will do. It is also known
that with constant space one can implement linear recursion in time
$n^2$. We show that one can do much better: $n^{1+\epsilon}$ for
arbitrarily small $\epsilon$. We also describe an algorithm that lies
between these two: it takes time n log n and space log n. It is shown
that several problems are closely related to the linear recursion
problem, for example, the problem of reversing an input tape given a
finite automaton with several one-way heads. By casting all these
problems into a canonical form, efficient solutions are obtained
simultaneously for all.
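For orientation, the constant-space, time-$n^2$ end of the spectrum
mentioned above can be obtained by sheer recomputation: to deliver the
values of a linear recursion in reverse order (the job a stack normally
does), recompute each value from scratch. The sketch is ours and is
emphatically not the report's $n^{1+\epsilon}$ or n log n construction.

    def emit_reversed(x0, step, n, emit):
        # Emit step^n(x0), step^(n-1)(x0), ..., x0 using O(1) extra storage.
        # Each value is recomputed from x0, so roughly n^2/2 applications of
        # step are made in total: the time/space trade-off in its crudest form.
        for i in range(n, -1, -1):
            x = x0
            for _ in range(i):
                x = step(x)
            emit(x)

    emit_reversed(1, lambda x: 2 * x, 5, print)   # prints 32, 16, 8, 4, 2, 1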
CS-TR-72-282
Report Number: CS-TR-72-284
Institution: Stanford University, Department of Computer Science
Title: Edmonds polyhedra and a hierarchy of combinatorial problems.
Author: Chvatal, Vaclav
Date: May 1972
Abstract:
Let S be a set of linear inequalities that determine a bounded
polyhedron P. The closure of S is the smallest set of inequalities that
contains S and is closed under two operations: (i) taking linear
combinations of inequalities, (ii) replacing an inequality $\sum\ a_j
x_j \leq\ a_0$, where $a_1, a_2, ... , a_n$ are integers, by the
inequality $\sum\ a_j x_j \leq\ a$ with $a \geq\ [a_0]$. Obviously, if
integers $x_1, x_2, ... , x_n$ satisfy all the inequalities in S then
they satisfy also all the inequalities in the closure of S. Conversely,
let $\sum\ c_j x_j \leq\ c_0$ hold for all choices of integers $x_1,
x_2, ... , x_n$, that satisfy all the inequalities in S. Then we prove
that $\sum\ c_j x_j \leq\ c_0$ belongs to the closure of S. To each
integer linear programming problem, we assign a nonnegative integer,
called its rank. (The rank is the minimum number of iterations of the
operation (ii) that are required in order to eliminate the integrality
constraint.) We prove that there is no upper bound on the rank of
problems arising from the search for largest independent sets in graphs.
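A small worked instance (ours, not taken from the report): for the
independent-set relaxation of a triangle, S consists of $x_1 + x_2 \leq
1$, $x_2 + x_3 \leq 1$, $x_1 + x_3 \leq 1$ and $-x_i \leq 0$. Half the
sum of the first three inequalities (operation (i)) gives $x_1 + x_2 +
x_3 \leq 3/2$, and operation (ii) with integer coefficients $a_j = 1$
replaces the right-hand side by $[3/2] = 1$, yielding $x_1 + x_2 + x_3
\leq 1$. This holds at every integer point of P but cuts off the
fractional point (1/2, 1/2, 1/2), so a single application of (ii)
suffices for the triangle; the result quoted above says that for
independent-set problems in general no fixed number of applications is
enough.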
CS-TR-72-284
Report Number: CS-TR-72-286
Institution: Stanford University, Department of Computer Science
Title:
On the solution of Moser's problem in four dimensions, and related
issues. A collection of two papers: On the solution of Moser's problem
in four dimensions and Independent permutations as related to a problem
of Moser and a theorem of Polya.
Author: Chandra, Ashok K.
Date: May 1972
Abstract:
The problem of finding the largest set of nodes in a d-cube of side 3
such that no three nodes are collinear was proposed by Moser. Small
values of d (viz., $d \leq\ 3$) resulted in elegant symmetric
solutions. It is shown that this does not remain the case in 4
dimensions where at most 43 nodes can be chosen, and these must not
include the center node.
CS-TR-72-286
Report Number: CS-TR-72-288
Institution: Stanford University, Department of Computer Science
Title: Logic for Computable Functions: description of a machine implementation.
Author: Milner, Robin
Date: May 1972
Abstract:
This paper is primarily a user's manual for LCF, a proof-checking
program for a logic of computable functions proposed by Dana Scott in
1969 but unpublished by him. We use the name LCF also for the logic
itself, which is presented at the start of the paper. The
proof-checking program is designed to allow the user interactively to
generate formal proofs about computable functions and functionals over
a variety of domains, including those of interest to the computer
scientist - for example, integers, lists and computer programs and
their semantics. The user's task is alleviated by two features: a
subgoaling facility and a powerful simplification mechanism.
Applications include proofs of program correctness and in particular of
compiler correctness; these applications are not discussed herein, but
are illustrated in the papers referenced in this introduction.
CS-TR-72-288
Report Number: CS-TR-72-289
Institution: Stanford University, Department of Computer Science
Title: Lakoff on linguistics and natural logic.
Author: Wilks, Yorick A.
Date: June 1972
Abstract:
The paper examines and criticises Lakoff's notions of a natural logic
and of a generative semantics described in terms of logic. I argue that
the relationship of these notions to logic as normally understood is
unclear, but I suggest, in the course of the paper, a number of
possible interpretations of his thesis of generative semantics. I argue
further that on these interpretations the thesis (of Generative
Semantics) is false, unless it be taken as a mere notational variant of
Chomskyan theory. I argue, too, that Lakoff's work may provide a
service in that it constitutes a reductio ad absurdum of the
derivational paradigm of modern linguistics; and shows, inadvertently,
that only a system with the ability to reconsider its own inferences
can do the job that Lakoff sets up for linguistic enquiry -- that is to
say, only an "artificial intelligence" system.
CS-TR-72-289
Report Number: CS-TR-72-290
Institution: Stanford University, Department of Computer Science
Title: Adverbs and belief.
Author: Schank, Roger C.
Date: June 1972
Abstract:
The treatment of a certain class of adverbs in conceptual
representation is given. Certain adverbs are shown to be representative
of complex belief structures. These adverbs serve as pointers that
explain where the sentence that they modify belongs in a belief
structure.
CS-TR-72-290
Report Number: CS-TR-72-291
Institution: Stanford University, Department of Computer Science
Title: Some combinatorial lemmas.
Author: Knuth, Donald E.
Date: June 1972
Abstract:
This report consists of several short papers which are completely
independent of each other: 1. "Wheels Within Wheels." Every finite
strongly connected digraph is either a single point or a set of n
smaller strongly connected digraphs joined by an oriented cycle of
length n. This result is proved in somewhat stronger form, and two
applications are given. 2. "An Experiment in Optimal Sorting." An
unsuccessful attempt to sort 13 or 14 elements in fewer comparisons
than the Ford-Johnson algorithm is described. (Coauthor: E.B.
Kaehler.) 3. "Permutations With Nonnegative Partial Sums." A sequence
of s positive and t negative real numbers, whose sum is zero, can be
arranged in at least (s+t-1)! and at most (s+t)!/(max(s,t)+1) <
2(s+t-1)! ways such that the partial sums $x_1 + ... + x_j$ are
nonnegative for $1 \leq\ j \leq\ s+t$.
CS-TR-72-291
Report Number: CS-TR-72-292
Institution: Stanford University, Department of Computer Science
Title: Selected combinatorial research problems.
Author: Chvatal, Vaclav
Author: Klarner, David A.
Author: Knuth, Donald E.
Date: June 1972
Abstract:
Thirty-seven research problems are described, covering a wide range of
combinatorial topics. Unlike Hilbert's problems, most of these are not
especially famous and they might be "do-able" in the next few years.
(Problems 1-16 were contributed by Klarner, 17-26 by Chvatal, 27-37 by
Knuth. All cash awards are Chvatal's responsibility.)
CS-TR-72-292
Report Number: CS-TR-72-299
Institution: Stanford University, Department of Computer Science
Title: Semantic categories of nominals for conceptual dependency analysis of natural language.
Author: Russell, Sylvia Weber
Date: July 1972
Abstract:
A system for the semantic categorization of conceptual objects
(nominals) is provided. The system is intended to aid computer
understanding of natural language. Specific implementations for
"noun-pairs" and prepositional phrases are offered.
CS-TR-72-299
Report Number: CS-TR-72-300
Institution: Stanford University, Department of Computer Science
Title: Counterexample to a conjecture of Fujii, Kasami and Ninomiya.
Author: Kaufman, Marc T.
Date: June 1972
Abstract:
In a recent paper [1], Fujii, Kasami and Ninomiya presented a procedure
for the optimal scheduling of a system of unit length tasks represented
as a directed acyclic graph on two identical processors. The authors
conjecture that the algorithm can be extended to the case where more
than two processors are employed. This note presents a counterexample
to that conjecture. [1] Fujii, M., T. Kasami and K. Ninomiya, "Optimal
Sequencing of Two Equivalent Processors," SIAM J. Appl. Math., Vol. 17,
No.4, July 1969, pp. 784-789.
CS-TR-72-300
Report Number: CS-TR-72-301
Institution: Stanford University, Department of Computer Science
Title: Product form of the Cholesky factorization for large-scale linear programming.
Author: Saunders, Michael A.
Date: August 1972
Abstract:
A variation of Gill and Murray's version of the revised simplex
algorithm is proposed, using the Cholesky factorization ${BB}^T =
{LDL}^T$ where B is the usual basis, D is diagonal and L is unit lower
triangular. It is shown that during change of basis L may be updated in
product form. As with standard methods using the product form of
inverse, this allows use of sequential storage devices for accumulating
updates to L. In addition the favorable numerical properties of Gill
and Murray's algorithm are retained. Close attention is given to
efficient out-of-core implementation. In the case of large-scale
block-angular problems, the updates to L will remain very sparse for
all iterations.
CS-TR-72-301
Report Number: CS-TR-72-304
Institution: Stanford University, Department of Computer Science
Title: Richardson's non-stationary matrix iterative procedure.
Author: Anderssen, Robert S.
Author: Golub, Gene H.
Date: August 1972
Abstract:
Because of its simplicity, Richardson's non-stationary iterative scheme
is a potentially powerful method for the solution of (linear) operator
equations. However, its general application has more or less been
blocked by (a) the problem of constructing polynomials, which deviate
least from zero on the spectrum of the given operator, and which are
required for the determination of the iteration parameters of the
non-stationary method, and (b) the instability of this scheme with
respect to rounding error effects. Recently, these difficulties were
examined in two Russian papers. In the first, Lebedev [1969]
constructed polynomials which deviate least from zero on a set of
subintervals of the real axis which contains the spectrum of the given
operator. In the second, Lebedev and Finogenov [1971] gave an ordering
for the iteration parameters of the non-stationary Richardson scheme
which makes it a stable numerical process. Translation of these two
papers appear as Appendices 1 and 2, respectively, in this report. The
body of the report represents an examination of the properties of
Richardson's non-stationary scheme and the pertinence of the two
mentioned papers along with the results of numerical experimentation
testing the actual implementation of the procedures given in them.
CS-TR-72-304
Report Number: CS-TR-72-306
Institution: Stanford University, Department of Computer Science
Title: A bibliography on computer graphics.
Author: Pollack, Bary W.
Date: August 1972
Abstract:
This bibliography includes the most important works describing the
software aspects of generative computer graphics. As such it will be of
most use to researchers, system designers and programmers whose
interests and responsibilities include the development of software
systems for interactive graphical input/output. The bibliography does
include a short section on hardware systems. Image analysis, pattern
recognition and picture processing and related fields are rather poorly
represented here. The interested researcher is referred to journals in
this field and to the reports of Azriel Rosenfeld, University of
Maryland, which include excellent bibliographic references.
CS-TR-72-306
Report Number: CS-TR-72-307
Institution: Stanford University, Department of Computer Science
Title: Hadamard transform for speech wave analysis.
Author: Tanaka, Hozumi
Date: August 1972
Abstract:
Two methods of speech wave analysis using the Hadamard transform are
discussed. The first method is a direct application of the Hadamard
transform for speech waves. The reason this method yields poor results
is discussed. The second method is the application of the Hadamard
transform to a log-magnitude frequency spectrum. After the application
of the Fourier transform the Hadamard transform is applied to detect a
pitch period or to get a smoothed spectrum. This method shows some
positive aspects of the Hadamard transform for the analysis of a speech
wave with regard to the reduction of processing time required for
smoothing, but at the cost of precision. A formant tracking program for
voiced speech is implemented by using this method and an edge following
technique used in scene analysis.
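For reference, the (unnormalized) fast Walsh-Hadamard transform
underlying both methods uses only additions and subtractions; the
standard butterfly sketch below is ours and is not the report's
analysis pipeline. In the second method it would be applied to the
log-magnitude spectrum obtained from the Fourier transform.

    def fwht(signal):
        # Fast Walsh-Hadamard transform; the length must be a power of two.
        a = list(signal)
        h = 1
        while h < len(a):
            for start in range(0, len(a), 2 * h):
                for j in range(start, start + h):
                    a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
            h *= 2
        return a

    print(fwht([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0]))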
CS-TR-72-307
Report Number: CS-TR-72-308
Institution: Stanford University, Department of Computer Science
Title: Recent developments in SAIL, an algol-based language for artificial intelligence.
Author: Feldman, Jerome A.
Author: Low, James R.
Author: Swinehart, Daniel C.
Author: Taylor, Russell H.
Date: November 1972
Abstract:
New features added to SAIL, an ALGOL based language for the PDP-10, are
discussed. The features include: procedure variables; multiple
processes; coroutines; a limited form of backtracking; an event
mechanism for inter-process communication; and matching procedures, a
new way of searching the LEAP associative data base.
CS-TR-72-308
Report Number: CS-TR-72-310
Institution: Stanford University, Department of Computer Science
Title: Anomalies in scheduling unit-time tasks.
Author: Kaufman, Marc T.
Date: June 1972
Abstract:
In this paper we examine the problem of scheduling a set of tasks on a
system with a number of identical processors. Several timing anomalies
are known to exist for the general case, in which the execution time
can increase when inter-task constraints are removed or processors are
added. It is shown that these anomalies also exist when tasks are
restricted to be of equal (unit) length. Several, increasingly
restrictive, heuristic scheduling algorithms are reviewed. The "added
processor" anomaly is shown to persist through all of them, though in
successively weaker form.
CS-TR-72-310
Report Number: CS-TR-72-317
Institution: Stanford University, Department of Computer Science
Title: An analysis of drum storage units.
Author: Fuller, Samuel H.
Author: Baskett, Forest
Date: August 1972
Abstract:
This article discusses the modeling and analysis of drum-like storage
units. Two common forms of drum organizations and two common scheduling
disciplines are considered: the file drum and the paging drum;
first-in-first-out (FIFO) scheduling and shortest-latency-time-first
(SLTF) scheduling. The modeling of the I/O requests to the drum is an
important aspect of this analysis. Measurements are presented to
indicate that it is realistic to model requests for records, or blocks
of information to a file drum, as requests that have starting addresses
uniformly distributed around the circumference of the drum and transfer
times that are exponentially distributed with a mean of 1/2 to 1/3 of a
drum revolution. The arrival of I/O requests is first assumed to be a
Poisson process and then generalized to the case of a computer system
with a finite degree of multiprogramming. An exact analysis of all the
models except the SLTF file drum is presented; in this case the
complexity of the drum organization has forced us to accept an
approximate analysis. In order to examine the error introduced into the
analysis of the SLTF file drum by our approximations, the results of
the analytic models are compared to a simulation model of the SLTF file
drum.
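A toy batch simulation (our sketch, not the article's queueing analysis)
conveys the FIFO/SLTF distinction under the request model stated above:
starting addresses uniform around the circumference and exponentially
distributed transfer times.

    import random

    def service_time(requests, policy):
        # Time, in revolutions, to drain a batch of (start_angle, transfer_time)
        # requests on a file drum; the head rotates forward to each start.
        position, total, pending = 0.0, 0.0, list(requests)
        while pending:
            if policy == 'FIFO':
                r = pending.pop(0)
            else:   # 'SLTF': minimum forward rotational latency from here
                r = min(pending, key=lambda q: (q[0] - position) % 1.0)
                pending.remove(r)
            total += (r[0] - position) % 1.0 + r[1]
            position = (r[0] + r[1]) % 1.0
        return total

    random.seed(1)
    batch = [(random.random(), random.expovariate(3.0)) for _ in range(20)]  # mean 1/3 revolution
    print(service_time(batch, 'FIFO'), service_time(batch, 'SLTF'))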
CS-TR-72-317
Report Number: CS-TR-72-318
Institution: Stanford University, Department of Computer Science
Title: Constructive graph labeling using double cosets.
Author: Brown, Harold
Author: Masinter, Larry M.
Author: Hjelmeland, Larry
Date: October 1972
Abstract:
Two efficient computer implemented algorithms are presented for
explicitly constructing all distinct labelings of a graph G with a set
of (not necessarily distinct) labels L, given the symmetry group B of
G. Two recursive reductions of the problem and a precomputation
involving certain orbits of stabilizer subgroups are the techniques
used by the algorithms. Moreover, for each labeling, the subgroup of B
which preserves that labeling is calculated.
CS-TR-72-318
Report Number: CS-TR-72-319
Institution: Stanford University, Department of Computer Science
Title: On a characterization of the best $\ell_2$ scaling of a matrix.
Author: Golub, Gene H.
Author: Varah, James M.
Date: October 1972
Abstract:
This paper is concerned with best two-sided scaling of a general square
matrix, and in particular with a certain characterization of that best
scaling: namely that the first and last singular vectors (on left and
right) of the scaled matrix have components of equal modulus.
Necessity, sufficiency, and its relation with other characterizations
are discussed. Then the problem of best scaling for rectangular
matrices is introduced and a conjecture made regarding a possible best
scaling. The conjecture is verified for some special cases.
CS-TR-72-319
Report Number: CS-TR-72-320
Institution: Stanford University, Department of Computer Science
Title: Winged edge polyhedron representation.
Author: Baumgart, Bruce G.
Date: October 1972
Abstract:
A winged edge polyhedron representation is stated and a set of
primitives that preserve Euler's F-E+V = 2 equation are explained.
Present use of this representation in artificial intelligence for
computer graphics and world modeling is illustrated and its intended
future application to computer vision is described.
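A minimal data-structure sketch of a winged-edge record follows (ours;
the field names are illustrative and need not match the report's).

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Vertex:
        x: float
        y: float
        z: float

    @dataclass
    class Face:
        pass

    @dataclass
    class Edge:
        # Each edge records its two vertices, its two faces, and four "wing"
        # edges: the next/previous edges around the left and right faces.
        v1: Vertex
        v2: Vertex
        left: Face
        right: Face
        next_left: Optional['Edge'] = None
        prev_left: Optional['Edge'] = None
        next_right: Optional['Edge'] = None
        prev_right: Optional['Edge'] = None

    def euler_check(vertices: List[Vertex], edges: List[Edge], faces: List[Face]) -> bool:
        # Primitives that build valid polyhedra should preserve F - E + V = 2.
        return len(faces) - len(edges) + len(vertices) == 2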
CS-TR-72-320
Report Number: CS-TR-72-322
Institution: Stanford University, Department of Computer Science
Title: Methods for modifying matrix factorizations.
Author: Gill, Phillip E.
Author: Golub, Gene H.
Author: Murray, Walter A.
Author: Saunders, Michael A.
Date: November 1972
Abstract:
In recent years several algorithms have appeared for modifying the
factors of a matrix following a rank-one change. These methods have
always been given in the context of specific applications and this has
probably inhibited their use over a wider field. In this report several
methods are described for modifying Cholesky factors. Some of these
have been published previously while others appear for the first time.
In addition, a new algorithm is presented for modifying the complete
orthogonal factorization of a general matrix, from which the
conventional QR factors are obtained as a special case. A uniform
notation has been used and emphasis has been placed on illustrating the
similarity between different methods.
CS-TR-72-322
Report Number: CS-TR-72-323
Institution: Stanford University, Department of Computer Science
Title: A fast method for solving a class of tri-diagonal linear systems.
Author: Malcolm, Michael A.
Author: Palmer, John
Date: November 1972
Abstract:
The solution of linear systems having real, symmetric, diagonally
dominant, tridiagonal coefficient matrices with constant diagonals is
considered. It is proved that the diagonals of the LU decomposition of
the coefficient matrix rapidly converge to full floating-point
precision. It is also proved that the computed LU decomposition
converges when floating-point arithmetic is used and that the limits of
the LU diagonals using floating point are roughly within machine
precision of the limits using real arithmetic. This fact is exploited
to reduce the number of floating-point operations required to solve a
linear system from 8n-7 to 5n+2k-3, where k is much less than n, the
order of the matrix. If the elements of the sub- and superdiagonals are
1, then only 4n+2k-3 operations are needed. The entire LU decomposition
takes k words of storage, and considerable savings in array
subscripting are achieved. Upper and lower bounds on k are obtained in
terms of the ratio of the coefficient matrix diagonal constants and
parameters of the floating-point number system. Various generalizations
of these results are discussed.
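The convergence being exploited is easy to observe: for the symmetric
tridiagonal matrix with constant main diagonal a and off-diagonal b,
the LU diagonals satisfy $d_1 = a$ and $d_{i+1} = a - b^2/d_i$. The
sketch below (ours, an illustration of the phenomenon rather than the
solver described above) shows how few distinct values need be stored.

    def lu_diagonals(a, b, rel_tol=1e-15):
        # Diagonal entries of the LU factorization of the tridiagonal matrix with
        # constant main diagonal a and off-diagonals b (assumes |a| > 2|b|).
        d = [a]
        while True:
            nxt = a - b * b / d[-1]
            converged = abs(nxt - d[-1]) <= rel_tol * abs(nxt)
            d.append(nxt)
            if converged:
                return d

    diag = lu_diagonals(4.0, 1.0)
    print(len(diag), diag[-1])   # the count k is small; the limit is 2 + sqrt(3)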
CS-TR-72-323
Report Number: CS-TR-72-325
Institution: Stanford University, Department of Computer Science
Title:
Review of Hubert Dreyfus' What Computers Can't Do: a Critique of
Artificial Reason (Harper & Row, New York, 1972).
Author: Buchanan, Bruce G.
Date: November 1972
Abstract:
The recent book $\underline{What Computers Can't Do}$ by Hubert Dreyfus
is an attack on artificial intelligence research. This review takes the
position that the philosophical content of the book is interesting, but
that the attack on artificial intelligence is not well reasoned.
CS-TR-72-325
Report Number: CS-TR-72-326
Institution: Stanford University, Department of Computer Science
Title:
Can expert judges, using transcripts of teletyped psychiatric
interviews, distinguish human paranoid patients from a computer
simulation of paranoid processes?
Author: Colby, Kenneth Mark
Author: Hilf, Franklin Dennis
Date: December 1972
Abstract:
Expert judges (psychiatrists and computer scientists) could not
correctly distinguish a simulation model of paranoid processes from
actual paranoid patients.
CS-TR-72-326
Report Number: CS-TR-72-328
Institution: Stanford University, Department of Computer Science
Title: An efficient implementation of Edmonds' maximum matching algorithm.
Author: Gabow, Harold N.
Date: June 1972
Abstract:
A matching in a graph is a collection of edges, no two of which share a
vertex. A maximum matching contains the greatest number of edges
possible. This paper presents an efficient implementation of Edmonds'
algorithm for finding maximum matchings. The computation time is
proportional to $V^3$, where V is the number of vertices; previous
algorithms have computation time proportional to $V^4$. The
implementation avoids Edmonds' blossom reduction by using pointers to
encode the structure of alternating paths.
CS-TR-72-328
Report Number: CS-TR-73-330
Institution: Stanford University, Department of Computer Science
Title: Axioms and theorems for integers, lists and finite sets in LCF.
Author: Newey, Malcolm C.
Date: January 1973
Abstract:
LCF (Logic for Computable Functions) is being promoted as a formal
language suitable for the discussion of various problems in the
Mathematical Theory of Computation (MTC). To this end, several examples
of MTC problems have been formalised and proofs have been exhibited
using the LCF proof-checker. However, in these examples, there has been
a certain amount of ad-hoc-ery in the proofs; namely, many mathematical
theorems have been assumed without proof and no axiomatisation of the
mathematical domains involved was given. This paper describes a
suitable mathematical environment for future LCF experiments and its
axiomatic basis. The environment developed, deemed appropriate for such
experiments, consists of a large body of theorems from the areas of
integer arithmetic, list manipulation and finite set theory.
CS-TR-73-330
Report Number: CS-TR-73-331
Institution: Stanford University, Department of Computer Science
Title: The computing time of the Euclidean algorithm.
Author: Collins, George E.
Date: January 1973
Abstract:
The maximum, minimum and average computing times of the classical
Euclidean algorithm for the greatest common divisor of two integers are
derived, to within codominance, as functions of the lengths of the two
inputs and the output.
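For concreteness, a direct step count of the classical algorithm (our
sketch); consecutive Fibonacci numbers are the classical worst case for
inputs of a given length.

    def gcd_with_steps(a, b):
        # Classical Euclidean algorithm; also returns the number of division steps.
        steps = 0
        while b:
            a, b = b, a % b
            steps += 1
        return a, steps

    print(gcd_with_steps(13, 8))     # (1, 5)
    print(gcd_with_steps(144, 89))   # (1, 10) -- consecutive Fibonacci numbers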
CS-TR-73-331
Report Number: CS-TR-73-332
Institution: Stanford University, Department of Computer Science
Title: Models of LCF.
Author: Milner, Robin
Date: January 1973
Abstract:
LCF is a deductive system for computable functions proposed by D. Scott
in 1969 in an unpublished memorandum. The purpose of the present paper
is to demonstrate the soundness of the system with respect to certain
models, which are partially ordered domains of continuous functions.
This demonstration was supplied by Scott in his memorandum; the present
paper is merely intended to make this work more accessible.
CS-TR-73-332
Report Number: CS-TR-73-333
Institution: Stanford University, Department of Computer Science
Title: On the power of programming features.
Author: Chandra, Ashok K.
Author: Manna, Zohar
Date: January 1973
Abstract:
We consider the power of several programming features such as counters,
pushdown stacks, queues, arrays, recursion and equality. In this study
program schemas are used as the model for computation. The relations
between the powers of these features is completely described by a
comparison diagram.
CS-TR-73-333
Report Number: CS-TR-73-334
Institution: Stanford University, Department of Computer Science
Title: URAND: a universal random number generator.
Author: Malcolm, Michael A.
Author: Moler, Cleve B.
Date: January 1973
Abstract:
A subroutine for generating uniformly-distributed floating-point
numbers in the interval [0,1) is presented in ANSI standard Fortran.
The subroutine, URAND, is designed to be relatively machine
independent. URAND has undergone minimal testing on various machines
and is thought to work properly on any machine having binary integer
number representation, integer multiplication modulo m and integer
addition either modulo m or yielding at least $\log_2(m)$ significant
bits, where m is some integral power of 2. Upon the first call of
URAND, the value of m is automatically determined and appropriate
constants for a linear congruential generator are computed following
the suggestions of D. E. Knuth, volume 2. URAND is guaranteed to have a
full-length cycle. Readers are invited to apply their favorite
statistical tests to URAND, using any binary machine, and report the
results to the authors.
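In outline, a generator of this family looks as follows; the sketch
uses familiar textbook constants rather than URAND's machine-derived
ones. For modulus $m = 2^e$ the cycle has full length whenever the
increment is odd and the multiplier is congruent to 1 modulo 4.

    M = 2 ** 31        # modulus: a power of two (URAND determines the machine's value at run time)
    A = 1103515245     # multiplier with A % 4 == 1   (illustrative constants, not URAND's)
    C = 12345          # odd increment

    def lcg(seed):
        # Linear congruential generator yielding floats in [0, 1).
        x = seed % M
        while True:
            x = (A * x + C) % M
            yield x / M

    gen = lcg(1)
    print([next(gen) for _ in range(3)])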
CS-TR-73-334
Report Number: CS-TR-73-335
Institution: Stanford University, Department of Computer Science
Title: Computation of the stationary distribution of an infinite Markov matrix.
Author: Golub, Gene H.
Author: Seneta, Eugene
Date: January 1973
Abstract:
An algorithm is presented for computing the unique stationary
distribution of an infinite stochastic matrix possessing at least one
column whose elements are bounded away from zero. Elementwise
convergence rate is discussed by means of two examples.
CS-TR-73-335
Report Number: CS-TR-73-337
Institution: Stanford University, Department of Computer Science
Title: Aesthetics systems.
Author: Gips, James
Author: Stiny, George
Date: January 1973
Abstract:
The formal structure of aesthetics systems is defined. Aesthetics
systems provide for the essential tasks of interpretation and
evaluation in aesthetic analysis. Kolmogorov's formulation of
information theory is applicable. An aesthetics system for a class of
non-representational, geometric paintings and its application to three
actual paintings is described in the Appendix.
CS-TR-73-337
Report Number: CS-TR-73-338
Institution: Stanford University, Department of Computer Science
Title: A finite basis theorem revisited.
Author: Klarner, David A.
Date: February 1973
Abstract:
Let S denote a set of k-dimensional boxes each having integral sides.
Let $\Gamma$(S) denote the set of all boxes which can be filled
completely with translates of elements of S. It is shown here that S
contains a finite subset B such that $\Gamma$(B) = $\Gamma$(S). This
result was proved for k = 1,2 in an earlier paper, but the proof for k
> 2 contained an error.
CS-TR-73-338
Report Number: CS-TR-73-339
Institution: Stanford University, Department of Computer Science
Title: Computation of the limited information maximum likelihood estimator.
Author: Dent, Warren T.
Author: Golub, Gene H.
Date: February 1973
Abstract:
Computation of the Limited Information Maximum Likelihood Estimator
(LIMLE) of the set of coefficients in a single equation of a system of
interdependent relations is sufficiently complicated to detract from
other potentially interesting properties. Although for finite samples
the LIMLE has no moments, asymptotically it remains normally
distributed and retains other properties associated with maximum
likelihood. The most extensive application of the estimator has been
made in the Brookings studies. We believe that current methods of
estimation are clumsy, and present a numerically stable estimation
schema based on Householder transformations and the singular value
decomposition. The analysis permits a convenient demonstration of
equivalence with the Two Stage Least Squares Estimator (TSLSE) in the
instance of just identification.
CS-TR-73-339
Report Number: CS-TR-73-340
Institution: Stanford University, Department of Computer Science
Title: Notes on a problem involving permutations as subsequences.
Author: Newey, Malcolm C.
Date: March 1973
Abstract:
The problem (attributed to R. M. Karp by Knuth) is to describe the
sequences of minimum length which contain, as subsequences, all the
permutations of an alphabet of n symbols. This paper catalogs some of
the easy observations on the problem and proves that the minimum
lengths for n=5, n=6 & n=7 are 19, 28 and 39 respectively. Also
presented is a construction which yields (for n>2) many appropriate
sequences of length $n^2$-2n+4, giving an upper bound on the length of
minimum strings that matches all known values exactly.
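Candidate strings are easy to check, and for very small n the minimum
length can even be brute-forced; the sketch below (ours) reproduces the
value 7 = $3^2 - 2 \cdot 3 + 4$ for n=3 but is hopeless beyond that.

    from itertools import permutations, product

    def contains_subsequence(s, p):
        it = iter(s)
        return all(symbol in it for symbol in p)   # greedy subsequence scan

    def contains_all_permutations(s, n):
        return all(contains_subsequence(s, p) for p in permutations(range(1, n + 1)))

    def minimum_length(n):
        # Exhaustive search over strings of increasing length (tiny n only).
        length = n
        while True:
            if any(contains_all_permutations(s, n)
                   for s in product(range(1, n + 1), repeat=length)):
                return length
            length += 1

    print(minimum_length(3))   # 7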
CS-TR-73-340
Report Number: CS-TR-73-341
Institution: Stanford University, Department of Computer Science
Title: A heuristic approach to program verification.
Author: Katz, Shmuel M.
Author: Manna, Zohar
Date: March 1973
Abstract:
We present various heuristic techniques for use in proving the
correctness of computer programs. The techniques are designed to obtain
automatically the "inductive assertions" attached to the loops of the
program which previously required human "understanding" of the
program's performance. We distinguish between two general approaches:
one in which we obtain the inductive assertion by analyzing predicates
which are known to be true at the entrances and exits of the loop
($\underline{top-down}$ approach), and another in which we generate the
inductive assertion directly from the statements of the loop
($\underline{bottom-up}$ approach).
CS-TR-73-341
Report Number: CS-TR-73-342
Institution: Stanford University, Department of Computer Science
Title: Matroid partitioning.
Author: Knuth, Donald E.
Date: March 1973
Abstract:
This report discusses a modified version of Edmonds's algorithm for
partitioning of a set into subsets independent in various given
matroids. If ${\cal M}_1$,...,${\cal M}_k$ are matroids defined on a
finite set E, the algorithm yields a simple necessary and sufficient
condition for whether or not the elements of E can be colored with k
colors such that (i) all elements of color j are independent in ${\cal
M}_j$, and (ii) the number of elements of color j lies between given
limits, $n_j \leq |E_j| \leq n'_j$. The algorithm either finds
such a coloring or it finds a proof that none exists, after making at
most $n^3 + n^2 k$ tests of independence in the given matroids, where
n is the number of elements in E.
CS-TR-73-342
Report Number: CS-TR-73-344
Institution: Stanford University, Department of Computer Science
Title: The fourteen primitive actions and their inferences.
Author: Schank, Roger C.
Date: March 1973
Abstract:
In order to represent the conceptual information underlying a natural
language sentence, a conceptual structure has been established that
uses the basic actor-action-object framework. It was the intent that
these structures have only one representation for one meaning,
regardless of the semantic form of the sentence being represented.
Actions were reduced to their basic parts so as to effect this. It was
found that only fourteen basic actions were needed as building blocks
by which all verbs can be represented. Each of these actions has a set
of actions or states which can be inferred when they are present.
CS-TR-73-344
Report Number: CS-TR-73-345
Institution: Stanford University, Department of Computer Science
Title: The minimum root separation of a polynomial.
Author: Collins, George E.
Author: Horowitz, Ellis
Date: April 1973
Abstract:
The minimum root separation of a complex polynomial A is defined as the
minimum of the distances between distinct roots of A. For polynomials
with Gaussian integer coefficients and no multiple roots, three lower
bounds are derived for the root separation. In each case the bound is a
function of the degree, n, of A and the sum, d, of the absolute values
of the coefficients of A. The notion of a semi-norm for a commutative
ring is defined, and it is shown how any semi-norm can be extended to
polynomial rings and matrix rings, obtaining a very general analogue of
Hadamard's determinant theorem.
CS-TR-73-345
Report Number: CS-TR-73-347
Institution: Stanford University, Department of Computer Science
Title: Multidimensional analysis in evaluating a simulation of paranoid thought.
Author: Colby, Kenneth Mark
Author: Hilf, Franklin Dennis
Date: May 1973
Abstract:
The limitations of Turing's Test as an evaluation procedure are
reviewed. More valuable are tests which ask expert judges to make
ratings along multiple dimensions essential to the model. In this way
the model's weaknesses become clarified and the model builder learns
where the model must be improved.
CS-TR-73-347
Report Number: CS-TR-73-346
Institution: Stanford University, Department of Computer Science
Title: The rationale for computer based treatment of language difficulties in nonspeaking autistic children.
Author: Colby, Kenneth Mark
Date: March 1973
Abstract:
The principles underlying a computer-based treatment method for
language acquisition in nonspeaking autistic children are described.
The main principle involves encouragement of exploratory learning with
minimum adult interference.
CS-TR-73-346
Report Number: CS-TR-73-348
Institution: Stanford University, Department of Computer Science
Title: High order finite difference solution of differential equations.
Author: Pereyra, Victor
Date: April 1973
Abstract:
These seminar notes give a detailed treatment of finite difference
approximations to smooth nonlinear two-point boundary value problems
for second order differential equations. Consistency, stability,
convergence, and asymptotic expansions are discussed. Most results are
stated in such a way as to indicate extensions to more general
problems. Successive extrapolations and deferred corrections are
described and their implementations are explored thoroughly. A very
general deferred correction generator is developed and it is employed
in the implementation of a variable order, variable (uniform) step
method. Complete FORTRAN programs and extensive numerical experiments
and comparisons are included together with a set of 48 references.
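As a rough illustration of the kind of computation the report studies (a minimal sketch, not the report's FORTRAN programs or its variable-order method), the following Python fragment solves u'' = f on an interval with second-order central differences and then applies one Richardson extrapolation step, which raises the observed order from two to four on a smooth model problem:

    import numpy as np

    def solve_bvp(f, a, b, ua, ub, m):
        # Second-order central differences for u'' = f(x), u(a) = ua, u(b) = ub,
        # on m interior points with uniform spacing h.
        h = (b - a) / (m + 1)
        x = a + h * np.arange(1, m + 1)
        A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
             + np.diag(np.ones(m - 1), -1)) / h**2
        rhs = f(x)
        rhs[0] -= ua / h**2
        rhs[-1] -= ub / h**2
        return x, np.linalg.solve(A, rhs)

    # Model problem u'' = -sin x, u(0) = u(pi) = 0, exact solution u = sin x.
    f = lambda x: -np.sin(x)
    x1, u1 = solve_bvp(f, 0.0, np.pi, 0.0, 0.0, 31)
    x2, u2 = solve_bvp(f, 0.0, np.pi, 0.0, 0.0, 63)
    u_extrap = (4.0 * u2[1::2] - u1) / 3.0   # one extrapolation step on the shared grid points
    print(np.max(np.abs(u1 - np.sin(x1))), np.max(np.abs(u_extrap - np.sin(x1))))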
CS-TR-73-348
Report Number: CS-TR-73-349
Institution: Stanford University, Department of Computer Science
Title:
Two papers on the selection problem: Time Bounds for Selection [by
Manuel Blum, Robert W. Floyd, Vaughan Pratt, Ronald L. Rivest, and
Robert E. Tarjan] and Expected Time Bounds for Selection [by Robert W.
Floyd and Ronald L. Rivest].
Author: Blum, Manuel
Author: Floyd, Robert W.
Author: Pratt, Vaughan R.
Author: Rivest, Ronald L.
Author: Tarjan, Robert Endre
Date: April 1973
Abstract:
(1) The number of comparisons required to select the i-th smallest of n
numbers is shown to be at most a linear function of n by analysis of a
new selection algorithm -- PICK. Specifically, no more than 5.4305 n
comparisons are ever required. This bound is improved for extreme
values of i, and a new lower bound on the requisite number of
comparisons is also proved. (2) A new selection algorithm is presented
which is shown to be very efficient on the average, both theoretically
and practically. The number of comparisons used to select the i-th
smallest of n numbers is n + min(i,n-i) + o(n). A lower bound within 9%
of the above formula is also derived.
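For a concrete picture of linear-time selection, here is a standard median-of-medians sketch in Python. It follows the general idea behind PICK but is not the algorithm analysed in the report, and its constant is not the 5.4305 n bound quoted above:

    def select(a, i):
        # Return the i-th smallest (1-indexed) element of the sequence a.
        a = list(a)
        if len(a) <= 10:
            return sorted(a)[i - 1]
        # Medians of groups of five give a provably good pivot.
        groups = [sorted(a[j:j + 5]) for j in range(0, len(a), 5)]
        medians = [g[len(g) // 2] for g in groups]
        pivot = select(medians, (len(medians) + 1) // 2)
        lo = [x for x in a if x < pivot]
        eq = [x for x in a if x == pivot]
        if i <= len(lo):
            return select(lo, i)
        if i <= len(lo) + len(eq):
            return pivot
        return select([x for x in a if x > pivot], i - len(lo) - len(eq))

    print(select([9, 1, 7, 3, 5, 8, 2, 6, 4, 0, 11], 6))   # prints 5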
CS-TR-73-349
Report Number: CS-TR-73-350
Institution: Stanford University, Department of Computer Science
Title: An almost-optimal algorithm for the assembly line scheduling problem.
Author: Kaufman, Marc T.
Date: January 1973
Abstract:
This paper considers a solution to the multiprocessor scheduling
problem for the case where the ordering relation between tasks can be
represented as a tree. Assume that we have n identical processors, and
a number of tasks to perform. Each task $T_i$ requires an amount of
time ${\mu}_i$ to complete, $0 < {\mu}_i \leq k$, so that k is an
upper bound on task length. Tasks are indivisible, so that a processor
once assigned must remain assigned until the task completes (no
preemption). Then the "longest path" scheduling method is
almost-optimal in the following sense: Let $\omega$ be the total time
required to process all of the tasks by the "longest path" algorithm.
Let ${\omega}_o$ be the minimal time in which all of the tasks can be
processed. Let ${\omega}_p$ be the minimal time to process all of the
tasks if arbitrary preemption of processors is allowed. Then:
${\omega}_p \leq {\omega}_o \leq \omega \leq {\omega}_p + k - k/n$,
where n is the number of processors available to any of the algorithms.
CS-TR-73-350
Report Number: CS-TR-73-351
Institution: Stanford University, Department of Computer Science
Title: Performance of an I/O channel with multiple paging drums (digest edition).
Author: Fuller, Samuel H.
Date: August 1972
Abstract:
For rotating storage units, a paging drum organization is known to
offer substantially better response time to I/O requests than is a more
conventional (file) organization [Abate and Dubner, 1969; Fuller and
Baskett, 1972]. When several, asynchronous paging drums are attached to
a single I/O channel, however, much of the gain in response time due to
the paging organization is lost; this article investigates the reasons
for this loss in performance. A model of an I/O channel with multiple
paging drums is presented and we embed into the model a Markov chain
that closely approximates the behavior of the I/O channel. The analysis
then leads to the moment generating function of sector queue size and
the Laplace-Stieltjes transform of the waiting time. A significant
observation is that the expected waiting time for an I/O request to a
drum can be divided into two terms: one independent of the load of I/O
requests to the drum and another that monotonically increases with
increasing load. Moreover, the load varying term of the waiting time is
nearly proportional to (2 - 1/k) where k is the number of drums
connected to the I/O channel. The validity of the Markov chain
approximation is examined in several cases by a comparison of the
analytic results to the actual performance of an I/O channel with
several paging drums.
CS-TR-73-351
Report Number: CS-TR-73-352
Institution: Stanford University, Department of Computer Science
Title: The expected difference between the SLTF and MTPT drum scheduling disciplines (digest edition).
Author: Fuller, Samuel H.
Date: August 1972
Abstract:
This report is a sequel to an earlier report [Fuller, 1971] that
develops a minimal-total-processing-time (MTPT) drum scheduling
algorithm. A quantitative comparison between MTPT schedules and
shortest-latency-time-first (SLTF) schedules, commonly acknowledged as
good schedules for drum-like storage units, is presented here. The
analysis develops an analogy to random walks and proves several
asymptotic properties of collections of records on drums. These
properties are specialized to the MTPT and SLTF algorithms and it is
shown that for sufficiently large sets of records, the expected
processing time of an SLTF schedule is longer than that of an MTPT schedule by
the expected record length. The results of a simulation study are also
presented to show the difference in MTPT and SLTF schedules for small
sets of records and for situations not covered in the analytic
discussion.
CS-TR-73-352
Report Number: CS-TR-73-353
Institution: Stanford University, Department of Computer Science
Title: Random arrivals and MTPT disk scheduling disciplines.
Author: Fuller, Samuel H.
Date: August 1972
Abstract:
This article investigates the application of
minimal-total-processing-time (MTPT) scheduling disciplines to rotating
storage units when random arrival of requests is allowed. Fixed-head
drum and moving-head disk storage units are considered and particular
emphasis is placed on the relative merits of the MTPT scheduling
discipline with respect to the shortest-latency-time-first (SLTF)
scheduling discipline. The data presented are the results of simulation
studies. Situations are discovered in which the MTPT discipline is
superior to the SLTF discipline, and situations are also discovered in
which the opposite is true. An implementation of the MTPT scheduling
algorithm is presented and the computational requirements of the
algorithm are discussed. It is shown that the sorting procedure is the
most time consuming phase of the algorithm.
CS-TR-73-353
Report Number: CS-TR-73-354
Institution: Stanford University, Department of Computer Science
Title: The number of SDR's in certain regular systems.
Author: Klarner, David A.
Date: April 1973
Abstract:
Let ($a_1$,...,$a_k$) = $\bar{a}$ denote a vector of numbers, and let
C($\bar{a}$,n) denote the n $\times$ n cyclic matrix having
($a_1$,...,$a_k$,0,...,0) as its first row. It is shown that the
sequences (det C($\bar{a}$,n): n = k,k+1,...) and (per C($\bar{a}$,n):
n = k,k+1,...) satisfy linear homogeneous difference equations with
constant coefficients. The permanent, per C, of a matrix C is defined
like the determinant except that one forgets about ${(-1)}^{\mathrm{sign}\,\pi}$
where $\pi$ is a permutation.
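A tiny numerical illustration of the objects involved (assuming the usual circulant convention for C($\bar{a}$,n); not code from the report): for $\bar{a}$ = (1,2) the determinants form the sequence $1 - (-2)^n$, which indeed satisfies a fixed linear recurrence with constant coefficients:

    import numpy as np

    def cyclic_matrix(a, n):
        # n x n cyclic matrix whose first row is (a_1, ..., a_k, 0, ..., 0);
        # each later row is the previous one shifted one place to the right.
        row = np.zeros(n)
        row[:len(a)] = a
        return np.array([np.roll(row, i) for i in range(n)])

    # -3, 9, -15, 33, -63, 129 for n = 2, ..., 7
    print([round(np.linalg.det(cyclic_matrix([1.0, 2.0], n))) for n in range(2, 8)])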
CS-TR-73-354
Report Number: CS-TR-73-355
Institution: Stanford University, Department of Computer Science
Title: An analysis of central processor scheduling in multiprogrammed computer systems (digest edition).
Author: Price, Thomas G.
Date: October 1972
Abstract:
A simple finite source model is used to gain insight into the effect of
central processor scheduling in multiprogrammed computer systems. CPU
utilization is chosen as the measure of performance and this decision
is discussed. A relation between CPU utilization and flow time is
developed. It is shown that the shortest-remaining-processing-time
discipline maximizes both CPU utilization and I/O utilization for the
queueing model M/G/1/N. An exact analysis of processor utilization
using shortest-remaining-processing-time scheduling for systems with
two jobs is given and it is observed that the processor utilization is
independent of the form of the processing time distribution. The effect
of the CPU processing time distribution on performance is discussed.
For first-come-first-served scheduling, it is shown that distributions
with the same mean and variance can yield significantly different
processor utilizations and that utilization may or may not
significantly decrease with increasing variance. The results are used
to compare several scheduling disciplines of practical interest. An
approximate expression for CPU utilization using
shortest-remaining-processing-time scheduling in systems with N jobs is
given.
CS-TR-73-355
Report Number: CS-TR-73-356
Institution: Stanford University, Department of Computer Science
Title: MLISP2.
Author: Smith, David Canfield
Author: Enea, Horace J.
Date: May 1973
Abstract:
MLISP2 is a high-level programming language based on LISP. Features: 1.
The notation of MLISP. 2. Extensibility---the ability to extend the
language and to define new languages. 3. Pattern matching---the ability
to match input against context free or sensitive patterns. 4.
Backtracking--the ability to set decision points, manipulate contexts
and backtrack.
CS-TR-73-356
Report Number: CS-TR-73-357
Institution: Stanford University, Department of Computer Science
Title: A conceptually based sentence paraphraser.
Author: Goldman, Neil M.
Author: Riesbeck, Christopher K.
Date: May 1973
Abstract:
This report describes a system of programs which performs natural
language processing based on an underlying language-free (conceptual)
representation of meaning. This system is used to produce sentence
paraphrases which demonstrate a form of understanding with respect to a
given context. Particular emphasis has been placed on the major
subtasks of language analysis (mapping natural language into conceptual
structures) and language generation (mapping conceptual structures into
natural language), and on the interaction between these processes and a
conceptual memory model.
CS-TR-73-357
Report Number: CS-TR-73-358
Institution: Stanford University, Department of Computer Science
Title: Inference and the computer understanding of natural language.
Author: Schank, Roger C.
Author: Rieger, Charles J., III
Date: May 1973
Abstract:
The notion of computer understanding of natural language is examined
relative to inference mechanisms designed to function in a
language-free deep conceptual base (Conceptual Dependency). The
conceptual analysis of a natural language sentence into this conceptual
base, and the nature of the memory which stores and operates upon these
conceptual structures are described from both theoretical and practical
standpoints. The various types of inferences which can be made during
and after the conceptual analysis of a sentence are defined, and a
functioning program which performs these inference tasks is described.
Actual computer output is included.
CS-TR-73-358
Report Number: CS-TR-73-360
Institution: Stanford University, Department of Computer Science
Title: Open, closed, and mixed networks of queues with different classes of customers.
Author: Muntz, Richard R.
Author: Baskett, Forest, III
Date: August 1972
Abstract:
We derive the joint equilibrium distribution of queue sizes in a
network of queues containing N service centers and R classes of
customers. The equilibrium state probabilities have the general form:
P(S) = C d(S) $f_1$($x_1$)$f_2$($x_2$)...$f_N$($x_N$), where S is the
state of the system, $x_i$ is the configuration of customers at the ith
service center, d(S) is a function of the state of the model, $f_i$ is
a function that depends on the type of the ith service center, and C is
a normalizing constant. We consider four types of service centers to
model central processors, data channels, terminals, and routing delays.
The queueing disciplines associated with these service centers include
first-come-first-served, processor sharing, no queueing, and
last-come-first-served. Each customer belongs to a single class of
customers while awaiting or receiving service at a service center but
may change classes and service centers according to fixed probabilities
at the completion of a service request. For open networks we consider
state dependent arrival processes. Closed networks are those with no
arrivals. A network may be closed with respect to some classes of
customers and open with respect to other classes of customers. At three
of the four types of service centers, the service times of customers
are governed by probability distributions having rational Laplace
transforms, different classes of customers having different
distributions. At first-come-first-served type service centers the
service time distribution must be identical and exponential for all
classes of customers. Many of the network results of Jackson on arrival
and service rate dependencies, of Posner and Bernholtz on different
classes of customers, and of Chandy on different types of service
centers are combined and extended in this paper. The results become
special cases of the model presented here. An example shows how
different classes of customers can affect models of computer systems.
Finally, we show that an equivalent model encompassing all of the
results involves only classes of customers with identical exponentially
distributed service times. All of the other structure of the first
model can be absorbed into the fixed probabilities governing the change
of class and change of service center of each class of customers.
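To make the product form concrete, here is a minimal Python sketch for the simplest special case only: a single class of customers and exponential single-server stations in a closed network. It illustrates the shape of the result, not the full multi-class model of the paper; rho[i] stands for the visit ratio divided by the service rate at station i (names chosen for this example):

    from itertools import product
    from math import prod

    def closed_network_probabilities(rho, K):
        # Enumerate states (n_1, ..., n_N) with n_1 + ... + n_N = K,
        # weight each by the product-form term prod_i rho_i ** n_i,
        # and normalize; C plays the role of the normalizing constant.
        states = [s for s in product(range(K + 1), repeat=len(rho)) if sum(s) == K]
        weights = {s: prod(r ** n for r, n in zip(rho, s)) for s in states}
        C = 1.0 / sum(weights.values())
        return {s: C * w for s, w in weights.items()}

    print(closed_network_probabilities([1.0, 0.5], 3))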
CS-TR-73-360
Report Number: CS-TR-73-361
Institution: Stanford University, Department of Computer Science
Title: An algorithm for the construction of the graphs of organic molecules.
Author: Brown, Harold
Author: Masinter, Larry M.
Date: May 1973
Abstract:
A description and a formal proof of an efficient computer implemented
algorithm for the construction of graphs is presented. This algorithm,
which is part of a program for the automated analysis of organic
compounds, constructs all of the non-isomorphic, connected multi-graphs
based on a given degree sequence of nodes and arising from a
relatively small "catalog" of certain canonical graphs. For the graphs
of the more common organic molecules, a catalog of most of the
canonical graphs is known, and the algorithm can produce all of the
distinct valence isomers of these organic molecules.
CS-TR-73-361
Report Number: CS-TR-73-364
Institution: Stanford University, Department of Computer Science
Title: Estimation of probability density using signature tables for applications to pattern recognition.
Author: Thosar, Ravindra B.
Date: May 1973
Abstract:
The signature table training method consists of cumulative evaluation of a
function (such as a probability density) at pre-assigned co-ordinate
values of input parameters to the table. The training is conditional:
based on a binary valued "learning" input to a table which is compared
to the label attached to each training sample. Interpretation of an
unknown sample vector is then equivalent to a table look-up, i.e.
extraction of the function value stored at the proper co-ordinates.
Such a technique is very useful when a large number of samples must be
interpreted as in the case of speech recognition and the time required
for the training as well as for the recognition is at a premium.
However, this method is limited by prohibitive storage requirements,
even for a moderate number of parameters, when their relative
independence cannot be assumed. This report investigates the conditions
under which the higher dimensional probability density function can be
decomposed so that the density estimate is obtained by a hierarchy of
signature tables with consequent reduction in the storage requirement.
Practical utility of the theoretical results obtained in the report is
demonstrated by a vowel recognition experiment.
CS-TR-73-364
Report Number: CS-TR-73-365
Institution: Stanford University, Department of Computer Science
Title: Automatic program verification I: a logical basis and its implementation.
Author: Igarashi, Shigeru
Author: London, Ralph L.
Author: Luckham, David C.
Date: May 1973
Abstract:
Defining the semantics of programming languages by axioms and rules of
inference yields a deduction system within which proofs may be given
that programs satisfy specifications. The deduction system herein is
shown to be consistent and also deduction complete with respect to
Hoare's system. A subgoaler for the deduction system is described whose
input is a significant subset of Pascal programs plus inductive
assertions. The output is a set of verification conditions or lemmas to
be proved. Several non-trivial arithmetic and sorting programs have
been shown to satisfy specifications by using an interactive theorem
prover to automatically generate proofs of the verification conditions.
Additional components for a more powerful verification system are under
construction.
CS-TR-73-365
Report Number: CS-TR-73-368
Institution: Stanford University, Department of Computer Science
Title: The goals of linguistic theory revisited.
Author: Schank, Roger C.
Author: Wilks, Yorick A.
Date: May 1973
Abstract:
We examine the original goals of generative linguistic theory. We
suggest that these goals were well defined but misguided with respect
to their avoidance of the problem of modelling performance. With
developments such as Generative Semantics, it is no longer clear that
the goals are clearly defined. We argue that it is vital for
linguistics to concern itself with the procedures that humans use in
language. We then introduce a number of basic human competencies, in
the field of language understanding, understanding in context and the
use of inferential information, and argue that the modelling of these
aspects of language understanding requires procedures of a sort that
cannot be easily accommodated within the dominant paradigm. In
particular, we argue that the procedures that will be required in these
cases ought to be linguistic, and that the simple-minded importation of
techniques from logic may create a linguistics in which there cannot be
procedures of the required sort.
CS-TR-73-368
Report Number: CS-TR-73-369
Institution: Stanford University, Department of Computer Science
Title: The development of conceptual structures in children.
Author: Schank, Roger C.
Date: May 1973
Abstract:
Previous papers by the author have hypothesized that it is possible to
represent the meaning of natural language sentences using a framework
which has only fourteen primitive ACTs. This paper addresses the
problem of when and how these ACTs might be learned by children. The
speech of a child of age 2 is examined for possible knowledge of the
primitive ACTs as well as the conceptual relations underlying language.
It is shown that there is evidence that the conceptual structures
underlying language are probably complete by age 2. Next a child is
studied from birth to age 1. The emergence of the primitive ACTs and
the conceptual relations is traced. The hypothesis is made that the
structures that underlie and are necessary for language are present by
age 1.
CS-TR-73-369
Report Number: CS-TR-73-371
Institution: Stanford University, Department of Computer Science
Title: A review of "Structured Programming".
Author: Knuth, Donald E.
Date: June 1973
Abstract:
The recent book $\underline{Structured Programming}$ by O. J. Dahl, E.
W. Dijkstra, and C. A. R. Hoare promises to have a significant impact
on computer science. This report contains a detailed review of the
topics treated in that book, in the form of three informal "open
letters" to the three authors. It is hoped that circulation of these
letters to a wider audience at this time will help to promote useful
discussion of the important issues.
CS-TR-73-371
Report Number: CS-TR-73-373
Institution: Stanford University, Department of Computer Science
Title: SAIL user manual.
Author: VanLehn, Kurt A.
Date: July 1973
Abstract:
SAIL is a high-level programming language for the PDP-10 computer. It
includes an extended ALGOL 60 compiler and a companion set of
execution-time routines. In addition to ALGOL, the language features:
(1) flexible linking to hand-coded machine language algorithms, (2)
complete access to the PDP-10 I/O facilities, (3) a complete system of
compile-time arithmetic and logic as well as a flexible macro system,
(4) user modifiable error handling, (5) backtracking, and (6) interrupt
facilities. Furthermore, a subset of the SAIL language, called LEAP,
provides facilities for (1) sets and lists, (2) an associative data
structure, (3) independent processes, and (4) procedure variables. The
LEAP subset of SAIL is an extension of the LEAP language, which was
designed by J. Feldman and P. Rovner, and implemented on Lincoln
Laboratory's TX-2 (see [Feldman & Rovner, "An Algol-Based
Associative Language," Communications of the ACM, v.12, no. 8 (Aug.
1969), pp.439-449]). The extensions to LEAP are partially described in
"Recent Developments is SAIL" (see [Feldman et al., Proceedings of the
AFIPS Fall Joint Computer Conference, 1972, pp. 1193-1202]). This
manual describes the SAIL language and the execution-time routines for
the typical SAIL user: a non-novice programmer with some knowledge of
ALGOL. It lies somewhere between being a tutorial and a reference
manual.
CS-TR-73-373
Report Number: CS-TR-73-376
Institution: Stanford University, Department of Computer Science
Title: Lower estimates for the error of best uniform approximation.
Author: Meinardus, Guenter
Author: Taylor, Gerald D.
Date: July 1973
Abstract:
In this paper the lower bounds of de La Vallee Poussin and Remes for
the error of best uniform approximation from a linear subspace are
generalized to give analogous estimates based on k points, k = 1,...,n.
CS-TR-73-376
Report Number: CS-TR-73-378
Institution: Stanford University, Department of Computer Science
Title: The optimum comb method of pitch period analysis of continuous digitized speech.
Author: Moorer, James Anderson
Date: July 1973
Abstract:
A new method of tracking the fundamental frequency of voiced speech is
described. The method is shown to be of accuracy similar to that of the
Cepstrum technique. Since the method involves only additions and no
multiplications, it is shown to be faster than the SIFT algorithm.
CS-TR-73-378
Report Number: CS-TR-73-382
Institution: Stanford University, Department of Computer Science
Title: Axiomatic approach to total correctness of programs.
Author: Manna, Zohar
Author: Pnueli, Amir
Date: July 1973
Abstract:
We present here an axiomatic approach which enables one to prove by
formal methods that his program is "totally correct" (i.e., it
terminates and is logically correct -- does what it is supposed to do).
The approach is similar to Hoare's approach for proving that a program
is "partially correct" (i.e., that whenever it terminates it produces
correct results). Our extension to Hoare's method lies in the
possibility of proving correctness $\underline{and}$ termination at
once, and in the enlarged scope of properties that can be proved by it.
CS-TR-73-382
Report Number: CS-TR-73-383
Institution: Stanford University, Department of Computer Science
Title: Natural language inference.
Author: Wilks, Yorick A.
Date: August 1973
Abstract:
The paper describes the way in which a Preference Semantics system for
natural language analysis and generation tackles a difficult class of
anaphoric inference problems (finding the correct referent for an
English pronoun in context): those requiring either analytic
(conceptual) knowledge of a complex sort, or requiring weak inductive
knowledge of the course of events in the real world. The method
employed converts all available knowledge to a canonical template form
and endeavors to create chains of non-deductive inferences from the
unknowns to the possible referents. Its method of selecting among
possible chains of inferences is consistent with the overall principle
of "semantic preference" used to set up the original meaning
representation, of which these anaphoric inference procedures are a
manipulation.
CS-TR-73-383
Report Number: CS-TR-73-384
Institution: Stanford University, Department of Computer Science
Title: The generation of French from a semantic representation.
Author: Herskovits, Annette
Date: August 1973
Abstract:
The report contains first a brief description of Preference Semantics,
a system of representation and analysis of the meaning structure of
natural language. The analysis algorithm which transforms phrases into
semantic items called templates has been considered in detail
elsewhere, so this report concentrates on the second phase of analysis,
which binds templates together into a higher level semantic block
corresponding to an English paragraph, and which, in operation,
interlocks with the French generation procedure. During this phase, the
semantic relations between templates are extracted, pronouns are
referred and those word disambiguations are done that require the
context of a whole paragraph. These tasks require items called
PARAPLATES which are attached to keywords such as prepositions,
subjunctions and relative pronouns. The system chooses the
representation which maximises a carefully defined "semantic density."
A system for the generation of French sentences is then described,
based on the recursive evaluation of procedural generation patterns
called STEREOTYPES. The stereotypes are semantically context sensitive,
are attached to each sense of English words and keywords and are
carried into the representation by the analysis procedure. The
representation of the meaning of words, and the versatility of the
stereotype format, allow for fine meaning distinctions to appear in the
French, and for the construction of French differing radically from the
English original.
CS-TR-73-384
Report Number: CS-TR-73-385
Institution: Stanford University, Department of Computer Science
Title: Recognition of continuous speech: segmentation and classification using signature table adaptation.
Author: Thosar, Ravindra B.
Date: September 1973
Abstract:
This report explores the possibility of using a set of features for
segmentation and recognition of continuous speech. The features are not
necessarily "distinctive" or minimal, in the sense that they do not
divide the phonemes into mutually exclusive subsets, and can have high
redundancy. This concept of feature can thus avoid a priori binding
between the phoneme categories to be recognized and the set of features
defined in a particular system. An adaptive technique is used to find
the probability of the presence of a feature. Each feature is treated
independently of other features. An unknown utterance is thus
represented by a feature graph with associated probabilities. It is
hoped that such a representation would be valuable for a
hypothesize-test paradigm as opposed to one which operates on a linear
symbolic input.
CS-TR-73-385
Report Number: CS-TR-73-386
Institution: Stanford University, Department of Computer Science
Title: A corner finder for visual feedback.
Author: Perkins, W. A.
Author: Binford, Thomas O.
Date: August 1973
Abstract:
In visual-feedback work, a model of an object and its approximate
location are often known, and it is only necessary to determine its location
and orientation more accurately. The purpose of the program described
herein is to provide such information for the case in which the model
is an edge or corner. Given a model of a line or a corner with two or
three edges, the program searches a TV window of arbitrary size looking
for one or all corners which match the model. A model-driven program
directs the search. It calls on another program to find all lines
inside the window. Then it looks at these lines and eliminates lines
which cannot match any of the model lines. It next calls on a program
to form vertices and then checks for a matching vertex. If this simple
procedure fails, the model-driver has two backup procedures. First it
works with the lines that it has and tries to form a matching vertex
(corner). If this fails, it matches parts of the model with vertices
and lines that are present and then takes a careful look in a small
region in which it expects to find a missing line. The program often
finds weak contrast edges in this manner. Lines are found by a global
method after the entire window has been scanned with the Hueckel edge
operator.
CS-TR-73-386
Report Number: CS-TR-73-387
Institution: Stanford University, Department of Computer Science
Title: Analysis of behavior of chemical molecules: rule formation on non-homogeneous classes of objects.
Author: Buchanan, Bruce G.
Author: Sridharan, Natesa S.
Date: August 1973
Abstract:
An information processing model of some important aspects of inductive
reasoning is presented within the context of one scientific discipline.
Given a collection of experimental (mass spectrometry) data from
several chemical molecules, the computer program described here
separates the molecules into "well-behaved" subclasses and selects from
the space of all explanatory processes the "characteristic" processes
for each subclass. The definitions of "well-behaved" and
"characteristic" embody several heuristics which are discussed. Some
results of the program are discussed which have been useful to chemists
and which lend credibility to this approach.
CS-TR-73-387
Report Number: CS-TR-73-388
Institution: Stanford University, Department of Computer Science
Title: Interconnections for parallel memories to unscramble p-ordered vectors.
Author: Swanson, Roger C.
Date: May 1973
Abstract:
Several methods are being considered for storing arrays in a parallel
memory system so that various useful partitions of an array can be
fetched from the memory with a single access. Some of these methods
fetch vectors in an order scrambled from that required for a
computation. This paper considers the problem of unscrambling such
vectors when the vectors belong to a class called p-ordered vectors and
the memory system consists of a prime number of modules. Pairs of
interconnections are described that can unscramble p-ordered vectors in
a number of steps that grows as the square root of the number of
memories. Lower and upper bounds are given for the number of steps to
unscramble the worst case vector. The upper bound calculation that is
derived also provides an upper bound on the minimum diameter of a star
polygon with a fixed number of nodes and two interconnections. An
algorithm is given that has produced optimal pairs of interconnections
for all sizes of memory that have been tried. The algorithm appears to
find optimal pairs for all memory sizes, but no proof has yet been
found.
CS-TR-73-388
Report Number: CS-TR-73-390
Institution: Stanford University, Department of Computer Science
Title: A construction for the inverse of a Turing machine.
Author: Gips, James
Date: August 1973
Abstract: A direct construction for the inverse of a Turing machine is presented.
CS-TR-73-390
Report Number: CS-TR-73-391
Institution: Stanford University, Department of Computer Science
Title: Search strategies for the task of organic chemical synthesis.
Author: Sridharan, Natesa S.
Date: October 1973
Abstract:
A computer program has been written that successfully discovers
syntheses for complex organic chemical molecules. The definition of the
search space and strategies for heuristic search are described in this
paper.
CS-TR-73-391
Report Number: CS-TR-73-392
Institution: Stanford University, Department of Computer Science
Title: Sorting and Searching - errata and addenda.
Author: Knuth, Donald E.
Date: October 1973
Abstract:
This report lists all the typographical errors in $\underline{The Art
of Computer Programming / Volume 3}$ that are presently known to its
author. Several recent developments and references to the literature,
which will be incorporated in the second printing, are also included in
an attempt to keep the book up-to-date. Several dozen corrections to
the second (1971) printing of volume two are also included.
CS-TR-73-392
Report Number: CS-TR-73-394
Institution: Stanford University, Department of Computer Science
Title: Parallel programming: an axiomatic approach.
Author: Hoare, C. A. R.
Date: October 1973
Abstract:
This paper develops some ideas expounded in [C.A.R. Hoare. "Towards a
Theory of Parallel Programming," in $\underline{Operating Systems
Techniques}$, ed. C.A.R. Hoare and R.H. Perrot. Academic Press, 1972].
It distinguishes a number of ways of using parallelism, including
disjoint processes, competition, cooperation, communication and
"colluding". In each case an axiomatic proof rule is given. Some light
is thrown on traps or ON conditions. Warning: the program structuring
methods described here are not suitable for the construction of
operating systems.
CS-TR-73-394
Report Number: CS-TR-73-396
Institution: Stanford University, Department of Computer Science
Title: The use of sensory feedback in a programmable assembly system.
Author: Bolles, Robert C.
Author: Paul, Richard P.
Date: October 1973
Abstract:
This article describes an experimental, automated assembly system which
uses sensory feedback to control an electro-mechanical arm and TV
camera. Visual, tactile, and force feedback are used to improve
positional information, guide manipulations, and perform inspections.
The system has two phases: a 'planning' phase in which the computer is
programmed to assemble some object, and a 'working' phase in which the
computer controls the arm and TV camera in actually performing the
assembly. The working phase is designed to be run on a mini-computer.
The system has been used to assemble a water pump, consisting of a
base, gasket, top, and six screws. This example is used to explain how
the sensory data is incorporated into the control system. A movie
showing the pump assembly is available from the Stanford Artificial
Intelligence Laboratory.
CS-TR-73-396
Report Number: CS-TR-73-398
Institution: Stanford University, Department of Computer Science
Title: Image contouring and comparing.
Author: Baumgart, Bruce G.
Date: October 1973
Abstract:
A contour image representation is stated and an algorithm for
converting a set of digital television images into this representation
is explained. The algorithm consists of five steps: digital image
thresholding, binary image contouring, polygon nesting, polygon
smoothing, and polygon comparing. An implementation of the algorithm is
the main routine of a program called CRE; auxiliary routines provide
cart and turntable control, TV camera input, image display, and Xerox
printer output. A serendipitous application of CRE to type font construction
is explained. Details about the intended application of CRE to the
perception of physical objects will appear in sequels to this paper.
CS-TR-73-398
Report Number: CS-TR-73-401
Institution: Stanford University, Department of Computer Science
Title: Monitors: an operating system structuring concept.
Author: Hoare, C. A. R.
Date: November 1973
Abstract:
This paper develops Brinch-Hansen's concept of a monitor as a method of
structuring an operating system. It introduces a form of
synchronization, describes a possible method of implementation in terms
of semaphores, and gives a suitable proof rule. Illustrative examples
include a single resource scheduler, a bounded buffer, an alarm clock,
a buffer pool, a disc head optimizer, and a version of the problem of
readers and writers.
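As an informal illustration of the monitor idea (a sketch in Python with explicit locks and condition variables, not the notation or proof rule of the paper), a bounded buffer can be written so that every access to the shared queue happens under one lock, with separate conditions for "space available" and "item available":

    import threading
    from collections import deque

    class BoundedBuffer:
        # Monitor-style bounded buffer: one lock guards the data, and two
        # condition variables represent "not full" and "not empty".
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = deque()
            self.lock = threading.Lock()
            self.not_full = threading.Condition(self.lock)
            self.not_empty = threading.Condition(self.lock)

        def append(self, x):
            with self.lock:
                while len(self.items) == self.capacity:
                    self.not_full.wait()
                self.items.append(x)
                self.not_empty.notify()

        def remove(self):
            with self.lock:
                while not self.items:
                    self.not_empty.wait()
                x = self.items.popleft()
                self.not_full.notify()
                return x

The while loops re-check the waiting condition after each wakeup, as Python's (Mesa-style) condition variables require; Hoare's original signalling discipline is stricter on this point.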
CS-TR-73-401
Report Number: CS-TR-73-403
Institution: Stanford University, Department of Computer Science
Title: Hints on programming language design.
Author: Hoare, C. A. R.
Date: December 1973
Abstract:
This paper (based on a keynote address presented at the SIGACT/SIGPLAN
Symposium on Principles of Programming Languages, Boston, October 1-3,
1973) presents the view that a programming language is a tool which
should assist the programmer in the most difficult aspects of his art,
namely program design, documentation, and debugging. It discusses the
objective criteria for evaluating a language design, and illustrates
them by application to language features of both high level languages
and machine code programming. It concludes with an annotated reading
list, recommended for all intending language designers.
CS-TR-73-403
Report Number: CS-TR-73-379
Institution: Stanford University, Department of Computer Science
Title: The heterodyne filter as a tool for analysis of transient waveforms.
Author: Moorer, James Anderson
Date: July 1973
Abstract:
A method of analysis of transient waveforms is discussed. Its
properties and limitations are presented in the context of musical
tones. The method is shown to be useful when the risetimes of the
partials of the tone are not too short. An extension to inharmonic
partials and polyphonic musical sound is discussed.
CS-TR-73-379
Report Number: CS-TR-74-404
Institution: Stanford University, Department of Computer Science
Title: A catalog of quadri/trivalent graphs.
Author: Sridharan, Natesa S.
Date: January 1974
Abstract:
In a previous report [1973] a method for computer generation of
quadri/trivalent "vertex-graphs" was presented in detail. This report
is a catalog of 13 classes of graphs generated by using this method.
CS-TR-74-404
Report Number: CS-TR-74-405
Institution: Stanford University, Department of Computer Science
Title: Stanford Computer Science Department research report.
Author: Davis, Randall
Author: Wright, Margaret H.
Date: January 1974
Abstract:
This collection of reports is divided into two sections. The first
contains the research summaries for individual faculty members and
research associates in the Computer Science Department. Two professors
from Electrical Engineering are included as "Affiliated Faculty"
because their interests are closely related to those of the Department,
while Professors George Dantzig and Roger Schank do not appear because
they were on leave and unavailable when the summaries were prepared.
The second section gives an overview of the activities of research
groups in the Department. "Group" here is taken to imply many different
things, including people related by various degrees of intellectual
interests, physical proximity, or funding considerations. We have tried
to describe any group whose scope of interest is greater than that of
one person. The list of recent publications for each is not intended to
be comprehensive, but rather to give a feeling for the range of topics
considered. This collection of reports has been assembled to provide a
reasonably comprehensive review of research activities in the
Department. We hope that it will be widely useful -- in particular,
students in the Department may find it helpful in discovering
interesting projects and possible thesis topics. We expect also that it
will be of interest to many other people, both within and outside the
Department.
CS-TR-74-405
Report Number: CS-TR-74-406
Institution: Stanford University, Department of Computer Science
Title: Memory model for a robot.
Author: Perkins, W. A.
Date: January 1974
Abstract:
A memory model for a robot has been designed and tested in a simple
toy-block world for which it has shown clarity, efficiency, and
generality. In a constrained pseudo-English one can ask the program to
manipulate objects and query it about the present, past, and possible
future states of its world. The program has a good understanding of its
world and gives intelligent answers in reasonably good English. Past
and hypothetical states of the world are handled by changing the state
of the world in an imaginary context. Procedures interrogate and modify
two global databases, one which contains the present representation of
the world and another which contains the past history of events,
conversations, etc. The program has the ability to create, destroy, and
even resurrect objects in its world.
CS-TR-74-406
Report Number: CS-TR-74-407
Institution: Stanford University, Department of Computer Science
Title: FAIL.
Author: Wright, F. H. G., II
Author: Gorin, Ralph E.
Date: April 1974
Abstract:
This is a reference manual for FAIL, a fast, one-pass assembler for
PDP-10 and PDP-6 machine language. FAIL statements, pseudo-operations,
macros, and conditional assembly features are described. Although FAIL
uses substantially more main memory than MACRO-10, it assembles typical
programs about five times faster. FAIL assembles the entire Stanford
time-sharing operating system (two million characters) in less than
four minutes of CPU time on a KA-10 processor. FAIL permits an
ALGOL-type block structure which provides a way of localizing the usage
of some symbols to certain parts of the program, such that the same
symbol name can be used to mean different things in different blocks.
CS-TR-74-407
Report Number: CS-TR-74-409
Institution: Stanford University, Department of Computer Science
Title: Final report: the first ten years of artificial intelligence research at Stanford.
Author: Earnest, Lester D.
Author: McCarthy, John
Author: Feigenbaum, Edward A.
Author: Lederberg, Joshua
Date: July 1973
Abstract:
The first ten years of research in artificial intelligence and related
fields at Stanford University have yielded significant results in
computer vision and control of manipulators, speech recognition,
heuristic programming, representation theory, mathematical theory of
computation, and modeling of organic chemical processes. This report
summarizes the accomplishments and provides bibliographies in each
research area.
CS-TR-74-409
Report Number: CS-TR-74-411
Institution: Stanford University, Department of Computer Science
Title: After Leibniz...: discussions on philosophy and artificial intelligence.
Author: Anderson, D. Bruce
Author: Binford, Thomas O.
Author: Thomas, Arthur J.
Author: Weyhrauch, Richard W.
Author: Wilks, Yorick A.
Date: March 1974
Abstract:
This is an edited transcript of informal conversations which we have
had over recent months, in which we looked at some of the issues which
seem to arise when artificial intelligence and philosophy meet. Our aim
was to see what might be some of the fundamental principles of attempts
to build intelligent machines. The major topics covered are the
relationship of AI and philosophy and what help they might be to each
other; the mechanisms of natural inference and deduction; the question
of what kind of theory of meaning would be involved in a successful
natural language understanding program, and the nature of models in AI
research.
CS-TR-74-411
Report Number: CS-TR-74-414
Institution: Stanford University, Department of Computer Science
Title: GEOMED - a geometric editor.
Author: Baumgart, Bruce G.
Date: May 1974
Abstract:
GEOMED is a system for doing 3-D geometric modeling; used from a
keyboard, it is an interactive drawing program; used as a package of
SAIL or LISP accessible subroutines, it is a graphics language. With
GEOMED, arbitrary polyhedra can be constructed, moved about and viewed
in perspective with hidden lines eliminated. In addition to polyhedra,
camera and image models are provided so that simulators relevant to
computer vision, problem solving, and animation may be constructed.
CS-TR-74-414
Report Number: CS-TR-74-417
Institution: Stanford University, Department of Computer Science
Title: Some thoughts on proving clean termination of programs.
Author: Sites, Richard L.
Date: May 1974
Abstract:
Proof of clean termination is a useful sub-goal in the process of
proving that a program is totally correct. Clean termination means that
the program terminates (no infinite loops) and that it does so
normally, without any execution-time semantic errors (integer overflow,
use of undefined variables, subscript out of range, etc.). In contrast
to proofs of correctness, proof of clean termination requires no
extensive annotation of a program by a human user, but the proof says
nothing about the results calculated by the program, just that whatever
it does, it terminates cleanly. Two example proofs are given, of
previously published programs: TREESORT3 by Robert Floyd, and SELECT by
Ronald L. Rivest and Robert Floyd.
CS-TR-74-417
Report Number: CS-TR-74-420
Institution: Stanford University, Department of Computer Science
Title: Partially self-checking circuits and their use in performing logical operations.
Author: Wakerly, John F.
Date: August 1973
Abstract:
A new class of circuits called partially self-checking circuits is
described. These circuits have one mode of operation called secure mode
in which they have the properties of totally self-checking circuits;
that is, every fault is tested during normal operation and no fault can
cause an undetected error. They also have an insecure mode of operation
with the property that any fault which affects a result in insecure
mode is tested by some input in secure mode; however, undetected errors
may occur in insecure mode. One application of these circuits is in the
arithmetic and logic unit of a computer with data encoded in an
error-detecting code. While there is no code simpler than duplication
which detects single errors in logical operations such as AND and OR,
it is shown that there exist partially self-checking networks to
perform these operations. A commercially available MSI chip, the 74181
4-bit ALU, can be used in a partially self-checking network to perform
arithmetic and logical operations.
CS-TR-74-420
Report Number: CS-TR-74-423
Institution: Stanford University, Department of Computer Science
Title: Asymptotic representation of the average number of active modules in an n-way interleaved memory.
Author: Rao, Gururaj S.
Date: April 1974
Abstract:
In an n-way interleaved memory the effective bandwidth depends on the
average number of concurrently active modules. Using a model for the
memory which does not permit queueing on busy modules and which assumes
an infinite stream of calls on the modules, where the elements in the
stream occur with equal probability, the average number is a
combinatorial quantity. Hellerman has previously approximated this
quantity by $n^{0.56}$. We show in this paper that the average number
is asymptotically equal to $\sqrt{\frac{\pi n}{2}} - \frac{1}{3}$. The
method is due to Knuth and expresses the combinatorial quantity in
terms of the incomplete gamma function and its derivatives.
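Assuming the combinatorial quantity in question is Knuth's function Q(n) = $\sum_{k \geq 1} n!/((n-k)!\,n^k)$ (an assumption made only for this illustration), a short Python comparison of the exact value, the asymptotic form above, and Hellerman's $n^{0.56}$ looks like this:

    import math

    def q_exact(n):
        # Q(n) = sum_{k=1..n} n! / ((n-k)! * n^k), accumulated term by term.
        total, term = 0.0, 1.0
        for k in range(1, n + 1):
            term *= (n - k + 1) / n
            total += term
        return total

    for n in (16, 64, 256, 1024):
        asym = math.sqrt(math.pi * n / 2) - 1.0 / 3.0
        print(n, round(q_exact(n), 3), round(asym, 3), round(n ** 0.56, 3))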
CS-TR-74-423
Report Number: CS-TR-74-431
Institution: Stanford University, Department of Computer Science
Title: Pattern-matching rules for the recognition of natural language dialogue expressions.
Author: Colby, Kenneth Mark
Author: Parkison, Roger C.
Author: Faught, William S.
Date: June 1974
Abstract:
Man-machine dialogues using everyday conversational English present
problems for computer processing of natural language. Grammar-based
parsers which perform a word-by-word, parts-of-speech analysis are too
fragile to operate satisfactorily in real-time interviews allowing
unrestricted English. In constructing a simulation of paranoid thought
processes, we designed an algorithm capable of handling the linguistic
expressions used by interviewers in teletyped diagnostic psychiatric
interviews. The algorithm uses pattern-matching rules which attempt to
characterize the input expressions by progressively transforming them
into patterns which match, completely or fuzzily, abstract stored
patterns. The power of this approach lies in its ability to ignore
recognized and unrecognized words and still grasp the meaning of the
message. The methods utilized are general and could serve any "host"
system which takes natural language input.
CS-TR-74-431
Report Number: CS-TR-74-433
Institution: Stanford University, Department of Computer Science
Title: On automating the construction of programs.
Author: Buchanan, Jack R.
Author: Luckham, David C.
Date: May 1974
Abstract:
An experimental system for automatically generating certain simple
kinds of programs is described. The programs constructed are expressed
in a subset of ALGOL containing assignments, function calls,
conditional statements, while loops, and non-recursive procedure calls.
The input is an environment of primitive programs and programming
methods specified in a language currently used to define the semantics
of the output programming language. The system has been used to
generate programs for symbolic manipulation, robot control, everyday
planning, and computing arithmetical functions.
CS-TR-74-433
Report Number: CS-TR-74-435
Institution: Stanford University, Department of Computer Science
Title: Balanced computer systems.
Author: Price, Thomas G.
Date: April 1974
Abstract:
We use the central server model to extend Buzen's results on balance
and bottlenecks. We develop two measures which appear to be useful for
evaluating and improving computer system performance. The first
measure, called the balance index, is useful for balancing requests to
the peripheral processors. The second quantity, called the sensitivity
index, indicates which processing rates have the most effect on overall
system performance. We define the capacity of a central server model as
the maximum throughput as we vary the peripheral processor
probabilities. We show that the reciprocal of the CPU utilization is a
convex function of the peripheral processor probabilities and that a
necessary and sufficient condition for the peripheral processor
probabilities to achieve capacity is that the balance indexes are equal
for all peripheral processors. We give a method to calculate capacity
using classical optimization techniques. Finally, we consider the
problem of balancing the processing rates of the processors. Two
conditions for "balance" are derived. The first condition maximizes our
uncertainty about the next state of the system. This condition has
several desirable properties concerning throughput, utilizations,
overlap, and resistance to changes in job mix. The second condition is
based on obtaining the most throughput for a given cost.
CS-TR-74-435
Report Number: CS-TR-74-436
Institution: Stanford University, Department of Computer Science
Title: Natural language understanding systems within the AI paradigm: a survey and some comparisons.
Author: Wilks, Yorick A.
Date: December 1974
Abstract:
The paper surveys the major projects on the understanding of natural
language that fall within what may now be called the artificial
intelligence paradigm for natural language systems. Some space is
devoted to arguing that the paradigm is now a reality and different in
significant respects from the generative paradigm of present day
linguistics. The comparisons between systems center around questions of
the relative perspicuity of procedural and static representations; the
advantages and disadvantages of developing systems over a period of
time.
CS-TR-74-436
Report Number: CS-TR-74-439
Institution: Stanford University, Department of Computer Science
Title: On the solution of large, structured linear complementarity problems: III.
Author: Cottle, Richard W.
Author: Golub, Gene H.
Author: Sacher, Richard S.
Date: August 1974
Abstract:
This paper addresses the problem of solving a class of
specially-structured linear complementarity problems of potentially
very large size. An efficient method which couples a modification of
the block successive overrelaxation technique and several techniques
discussed by the authors in previous papers is proposed. Problems of
the type considered arise, for example, in solving approximations to
both the free boundary problem for finite-length journal bearings and
percolation problems in porous dams by numerical methods. These
applications and our computational experience with the method are
presented here.
CS-TR-74-439
Report Number: CS-TR-74-442
Institution: Stanford University, Department of Computer Science
Title: Estimating the efficiency of backtrack programs.
Author: Knuth, Donald E.
Date: August 1974
Abstract:
One of the chief difficulties associated with the so-called
backtracking technique for combinatorial problems has been our
inability to predict the efficiency of a given algorithm, or to compare
the efficiencies of different approaches, without actually writing and
running the programs. This paper presents a simple method which
produces reasonable estimates for most applications, requiring only a
modest amount of hand calculation. The method should prove to be of
considerable utility in connection with D. H. Lehmer's branch-and-bound
approach to combinatorial optimization.
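The method is, in later terminology, a random-probe estimator: follow one random path from the root of the backtrack tree, multiply the branching factors encountered, and average the resulting unbiased estimates of the tree size. A minimal Python sketch (the 4-queens generator below is an invented toy example, not from the report):

    import random

    def estimate_tree_size(root, children, trials=1000):
        # Average of unbiased single-probe estimates 1 + d1 + d1*d2 + ...,
        # where d1, d2, ... are the branching factors along one random path.
        total = 0.0
        for _ in range(trials):
            node, weight, estimate = root, 1.0, 1.0
            while True:
                kids = children(node)
                if not kids:
                    break
                weight *= len(kids)
                estimate += weight
                node = random.choice(kids)
            total += estimate
        return total / trials

    def queens_children(prefix, n=4):
        # Non-attacking extensions of a partial placement (one queen per row).
        return [prefix + (c,) for c in range(n)
                if all(c != r and abs(c - r) != len(prefix) - i
                       for i, r in enumerate(prefix))]

    print(estimate_tree_size((), queens_children))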
CS-TR-74-442
Report Number: CS-TR-74-444
Institution: Stanford University, Department of Computer Science
Title: Progress report on program-understanding systems.
Author: Green, C. Cordell
Author: Waldinger, Richard J.
Author: Barstow, David R.
Author: Elschlager, Robert A.
Author: Lenat, Douglas B.
Author: McCune, Brian P.
Author: Shaw, David E.
Author: Steinberg, Louis I.
Date: August 1974
Abstract:
This progress report covers the first year and one half of work by our
automatic-programming research group at the Stanford Artificial
Intelligence Laboratory. Major emphasis has been placed on methods of
program specification, codification of programming knowledge, and
implementation of pilot systems for program writing and understanding.
List processing has been used as the general problem domain for this
work.
CS-TR-74-444
Report Number: CS-TR-74-446
Institution: Stanford University, Department of Computer Science
Title: LCFsmall: an implementation of LCF.
Author: Aiello, Luigia
Author: Weyhrauch, Richard W.
Date: August 1974
Abstract:
This is a report on a computer program implementing a simplified
version of LCF. It is written (with minor exceptions) entirely in pure
LISP and has none of the user oriented features of the implementation
described by Milner. We attempt to represent directly in code the
metamathematical notions necessary to describe LCF. We hope that the
code is simple enough and the metamathematics is clear enough so that
properties of this particular program (e.g. its correctness) can
eventually be proved. The program is reproduced in full.
CS-TR-74-446
Report Number: CS-TR-74-447
Institution: Stanford University, Department of Computer Science
Title: The semantics of PASCAL in LCF.
Author: Aiello, Luigia
Author: Aiello, Mario
Author: Weyhrauch, Richard W.
Date: August 1974
Abstract:
We define a semantics for the arithmetic part of PASCAL by giving it an
interpretation in LCF, a language based on the typed
$\lambda$-calculus. Programs are represented in terms of their abstract
syntax. We show sample proofs, using LCF, of some general properties of
PASCAL and the correctness of some particular programs. A program
implementing the McCarthy Airline reservation system is proved correct.
CS-TR-74-447
Report Number: CS-TR-74-455
Institution: Stanford University, Department of Computer Science
Title: Edge-disjoint spanning trees, dominators, and depth-first search.
Author: Tarjan, Robert Endre
Date: September 1974
Abstract:
This paper presents an algorithm for finding two edge-disjoint spanning
trees rooted at a fixed vertex of a directed graph. The algorithm uses
depth-first search, an efficient method for computing disjoint set
unions, and an efficient method for computing dominators. It requires
O(V log V + E) time and O(V + E) space to analyze a graph with V
vertices and E edges.
CS-TR-74-455
Report Number: CS-TR-74-456
Institution: Stanford University, Department of Computer Science
Title: AL, a programming system for automation.
Author: Finkel, Raphael A.
Author: Taylor, Russell H.
Author: Bolles, Robert C.
Author: Paul, Richard P.
Author: Feldman, Jerome A.
Date: November 1974
Abstract:
AL is a high-level programming system for specification of manipulatory
tasks such as assembly of an object from parts. AL includes an
ALGOL-like source language, a translator for converting programs into
runnable code, and a runtime system for controlling manipulators and
other devices. The system includes advanced features for describing
individual motions of manipulators, for using sensory information, and
for describing assembly algorithms in terms of common domain-specific
primitives. This document describes the design of AL, which is
currently being implemented as a successor to the Stanford WAVE system.
CS-TR-74-456
Report Number: CS-TR-74-457
Institution: Stanford University, Department of Computer Science
Title: Ten criticisms of PARRY.
Author: Colby, Kenneth Mark
Date: September 1974
Abstract: Some major criticisms of a computer simulation of paranoid processes (PARRY) are reviewed and discussed.
CS-TR-74-457
Report Number: CS-TR-74-460
Institution: Stanford University, Department of Computer Science
Title: Random insertion into a priority queue structure.
Author: Porter, Thomas
Author: Simon, Istvan
Date: October 1974
Abstract:
The average number of levels that a new element moves up when inserted
into a heap is investigated. Two probabilistic models under which such
an average might be computed are proposed. A "lemma of conservation of
ignorance" is formulated and used in the derivation of an exact formula
for the average in one of these models. It is shown that this average
is bounded by a constant and its asymptotic behavior is discussed.
Numerical data for the second model is also provided and analyzed.
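As a point of reference for the quantity analyzed, the following Python sketch (mine, not the report's) inserts an element into an array-based binary min-heap and returns the number of levels the element moves up.

    def heap_insert(heap, x):
        # heap: Python list holding a binary min-heap
        heap.append(x)
        i, levels = len(heap) - 1, 0
        while i > 0 and heap[(i - 1) // 2] > x:
            heap[i] = heap[(i - 1) // 2]   # pull the larger parent down a level
            i = (i - 1) // 2
            levels += 1
        heap[i] = x
        return levels                      # the random variable whose mean is studied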
CS-TR-74-460
Report Number: CS-TR-74-462
Institution: Stanford University, Department of Computer Science
Title: A fast, feature-driven stereo depth program.
Author: Pingle, Karl K.
Author: Thomas, Arthur J.
Date: May 1975
Abstract:
In this paper we describe a fast, feature-driven program for extracting
depth information from stereoscopic sets of digitized TV images. This
is achieved by two means: in the simplest case, by statistically
correlating variable-sized windows on the basis of visual texture, and
in the more complex case by pre-processing the images to extract
significant visual features such as corners, and then using these
features to control the correlation process. The program runs on the
PDP-10 but uses a PDP-11/45 and an SPS-41 Signal Processing Computer as
subsidiary processors. The use of the two small, fast machines for the
performance of simple but often-repeated computations effects an
increase in speed sufficient to allow us to think of using this program
as a fast 3-dimensional segmentation method, preparatory to more
complex image processing. It is also intended for use in visual
feedback tasks involved in hand-eye coordination and automated
assembly. The current program is able to calculate the
three-dimensional positions of 10 points to within 5 millimeters, using
5 seconds of computation for extracting features, 1 second per image
for correlation, and 0.1 second for the depth calculation.
CS-TR-74-462
Report Number: CS-TR-74-466
Institution: Stanford University, Department of Computer Science
Title: Recent research in artificial intelligence, heuristic programming, and network protocols.
Author: Earnest, Lester D.
Author: McCarthy, John
Author: Feigenbaum, Edward A.
Author: Lederberg, Joshua
Author: Cerf, Vinton G.
Date: July 1974
Abstract:
This is a progress report for ARPA-sponsored research projects in
computer science for the period July 1973 to July 1974. Accomplishments
are reported in artificial intelligence (especially heuristic
programming, robotics, theorem proving, automatic programming, and
natural language understanding), mathematical theory of computation,
and protocol development for computer communication networks.
References to recent publications are provided for each topic.
CS-TR-74-466
Report Number: CS-TR-74-467
Institution: Stanford University, Department of Computer Science
Title: Checking proofs in the metamathematics of first order logic.
Author: Aiello, Mario
Author: Weyhrauch, Richard W.
Date: August 1974
Abstract:
This is a report on some of the first experiments of any size carried
out using the new first order proof checker FOL. We present two
different first order axiomatizations of the metamathematics of the
logic which FOL itself checks and show several proofs using each one.
The difference between the axiomatizations is that one defines the
metamathematics in a many sorted logic, the other does not.
CS-TR-74-467
Report Number: CS-TR-74-468
Institution: Stanford University, Department of Computer Science
Title: A combinatorial base for some optimal matroid intersection algorithms.
Author: Krogdahl, Stein
Date: November 1974
Abstract:
E. Lawler has given an algorithm for finding maximum weight
intersections for a pair of matroids, using linear programming concepts
and constructions to prove its correctness. In this paper another
theoretical base for this algorithm is given which depends only on the
basic properties of matroids, and which involves no linear programming
concepts.
CS-TR-74-468
Report Number: CS-TR-74-469
Institution: Stanford University, Department of Computer Science
Title: Molecular structure elucidation III.
Author: Brown, Harold
Date: December 1974
Abstract:
A computer implemented algorithm to solve the following graph
theoretical problem is presented: given the empirical formula for a
molecule and one or more non-overlapping substructural fragments of the
molecule, determine all the distinct molecular structures based on the
formula and containing the fragments. That is, given a degree sequence
of labeled nodes and one or more connected multigraphs, determine a
representative set of the isomorphism classes of the connected
multigraphs based on the degree sequence and containing the given
multi-graphs as non-overlapping subgraphs.
CS-TR-74-469
Report Number: CS-TR-74-470
Institution: Stanford University, Department of Computer Science
Title: Stable sorting and merging with optimal space and time bounds.
Author: Trabb-Pardo, Luis I.
Date: December 1974
Abstract:
This work introduces two algorithms for stable merging and stable
sorting of files. The algorithms have optimal worst case time bounds,
the merge is linear and the sort is of order n log n. Extra storage
requirements are also optimal, since both algorithms make use of a
fixed number of pointers. Files are handled only by means of the
primitives exchange and comparison of records and basic pointer
transformations.
CS-TR-74-470
Report Number: CS-TR-74-471
Institution: Stanford University, Department of Computer Science
Title: The interaction of inferences, affects, and intentions, in a model of paranoia.
Author: Faught, William S.
Author: Colby, Kenneth Mark
Author: Parkison, Roger C.
Date: December 1974
Abstract:
The analysis of natural language input into its underlying semantic
content is but one of the tasks necessary for a system (human or
non-human) to use natural language. Responding to natural language
input requires performing a number of tasks: 1) deriving facts about
the input and the situation in which it was spoken; 2) attending to the
system's needs, desires, and interests; 3) choosing intentions to
fulfill these interests; 4) deriving and executing actions from these
intentions. We describe a series of processes in a model of paranoia
which performs these tasks. We also describe the modifications made by
the paranoid processes to the normal processes. A computer program has
been constructed to test this theory.
CS-TR-74-471
Report Number: CS-TR-74-472
Institution: Stanford University, Department of Computer Science
Title: Stanford automatic photogrammetry research.
Author: Quam, Lynn H.
Author: Hannah, Marsha Jo
Date: December 1974
Abstract:
This report documents the feasibility study done at Stanford
University's Artificial Intelligence Laboratory on the problem of
computer automated aerial/orbital photogrammetry. The techniques
investigated were based on correlation matching of small areas in
digitized pairs of stereo images taken from high altitude or planetary
orbit, with the objective of deriving a 3-dimensional model for the
surface of a planet.
CS-TR-74-472
Report Number: CS-TR-74-473
Institution: Stanford University, Department of Computer Science
Title: Automatic program verification II: verifying programs by algebraic and logical reduction.
Author: Suzuki, Norihisa
Date: December 1974
Abstract:
Methods for verifying programs written in a higher level programming
language are devised and implemented. The system can verify programs
written in a subset of PASCAL, which may have data structures and
control structures such as WHILE, REPEAT, FOR, PROCEDURE, FUNCTION and
COROUTINE. The process of creation of verification conditions is an
extension of the work done by Igarashi, London and Luckham which is
based on the deductive theory by Hoare. Verification conditions are
proved using specialized simplification and proof techniques, which
consist of an arithmetic simplifier, equality replacement rules, a fast
algorithm for simplifying formulas using propositional truth-value
evaluation, and a depth-first proof search process. The deduction
mechanism used in this prover is based on a Gentzen-type formal system.
Several sorting programs including Floyd's TREESORT3 and Hoare's FIND
are verified. It is shown that the resulting array is not only
well-ordered but also a permutation of the input array.
CS-TR-74-473
Report Number: CS-TR-74-474
Institution: Stanford University, Department of Computer Science
Title: Automatic program verification III: a methodology for verifying programs.
Author: von Henke, Friedrich W.
Author: Luckham, David C.
Date: December 1974
Abstract:
The paper investigates methods for applying an on-line interactive
verification system designed to prove properties of PASCAL programs.
The methodology is intended to provide techniques for developing a
debugged and verified version starting from a program that (a) is
possibly unfinished in some respects, (b) may not satisfy the given
specifications, e.g., may contain bugs, (c) may have incomplete
documentation, (d) may be written in non-standard ways, e.g., may
depend on user-defined data structures. The methodology involves (i)
interactive application of a verification condition generator, an
algebraic simplifier and a theorem-prover; (ii) techniques for
describing data structures, type constraints, and properties of
programs and subprograms (i.e. lower level procedures); (iii) the use
of (abstract) data types in structuring programs and proofs. Within
each unit (i.e. segment of a problem), the interactive use is aimed at
reducing verification conditions to manageable proportions so that the
non-trivial factors may be analysed. Analysis of verification
conditions attempts to localize errors in the program logic, to extend
assertions inside the program, to spotlight additional assumptions on
program subfunctions (beyond those already specified by the
programmer), and to generate appropriate lemmas that allow a
verification to be completed. Methods for structuring correctness
proofs are discussed that are similar to those of "structured
programming". A detailed case study of a pattern matching algorithm
illustrating the various aspects of the methodology (including the role
played by the user) is given.
CS-TR-74-474
Report Number: CS-TR-75-476
Institution: Stanford University, Department of Computer Science
Title: A hypothetical dialogue exhibiting a knowledge base for a program-understanding system.
Author: Green, C. Cordell
Author: Barstow, David R.
Date: January 1975
Abstract:
A hypothetical dialogue with a fictitious program-understanding system
is presented. In the interactive dialogue the computer carries out a
detailed synthesis of a simple insertion sort program for linked lists.
The content, length and complexity of the dialogue reflect the
underlying programming knowledge which would be required for a system
to accomplish this task. The nature of the knowledge is discussed and
the codification of such programming knowledge is suggested as a major
research area in the development of program-understanding systems.
CS-TR-75-476
Report Number: CS-TR-75-477
Institution: Stanford University, Department of Computer Science
Title: Longest common subsequences of two random sequences.
Author: Chvatal, Vaclav
Author: Sankoff, David
Date: January 1975
Abstract:
Given two random k-ary sequences of length n, what is f(n,k), the
expected length of their longest common subsequence? This problem
arises in the study of molecular evolution. We calculate f(n,k) for all
k, where n $\leq$ 5, and f(n,2) where n $\leq$ 10. We study the
limiting behavior of $n^{-1}$f(n,k) and derive upper and lower bounds
on these limits for all k. Finally we estimate by Monte-Carlo methods
f(100,k), f(1000,2) and f(5000,2).
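For readers who want to reproduce the Monte Carlo part, a small Python sketch (assumptions mine: uniform random k-ary sequences and the standard quadratic dynamic program for the longest common subsequence) is:

    import random

    def lcs_length(a, b):
        # classical O(len(a) * len(b)) dynamic program, one row at a time
        prev = [0] * (len(b) + 1)
        for x in a:
            cur = [0]
            for j, y in enumerate(b, 1):
                cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
            prev = cur
        return prev[-1]

    def estimate_f(n, k, trials=200):
        # Monte Carlo estimate of f(n,k), the expected LCS length
        total = 0
        for _ in range(trials):
            a = [random.randrange(k) for _ in range(n)]
            b = [random.randrange(k) for _ in range(n)]
            total += lcs_length(a, b)
        return total / trials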
CS-TR-75-477
Report Number: CS-TR-75-478
Institution: Stanford University, Department of Computer Science
Title: Ill-conditioned eigensystems and the computation of the Jordan canonical form.
Author: Golub, Gene H.
Author: Wilkinson, James H.
Date: February 1975
Abstract:
The solution of the complete eigenvalue problem for a non-normal matrix
A presents severe practical difficulties when A is defective or close
to a defective matrix. However in the presence of rounding errors one
cannot even determine whether or not a matrix is defective. Several of
the more stable methods for computing the Jordan canonical form are
discussed together with the alternative approach of computing
well-defined bases (usually orthogonal) of the relevant invariant
subspaces.
CS-TR-75-478
Report Number: CS-TR-75-479
Institution: Stanford University, Department of Computer Science
Title: Error bounds in the approximation of eigenvalues of differential and integral operators.
Author: Chatelin, Francois
Author: Lemordant, J.
Date: February 1975
Abstract:
Various methods of approximating the eigenvalues and invariant
subspaces of nonself-adjoint differential and integral operators are
unified in a general theory. Error bounds are given, from which most of
the error bounds in the literature can be derived.
CS-TR-75-479
Report Number: CS-TR-75-481
Institution: Stanford University, Department of Computer Science
Title: Hybrid difference methods for the initial boundary-value problem for hyperbolic equations.
Author: Oliger, Joseph E.
Date: February 1975
Abstract:
The use of lower order approximations in the neighborhood of boundaries
coupled with higher order interior approximations is examined for the
mixed initial boundary-value problem for hyperbolic partial
differential equations. Uniform error can be maintained using smaller
grid intervals with the lower order approximations near the boundaries.
Stability results are presented for approximations to the initial
boundary-value problem for the model equation $u_t + cu_x = 0$
which are fourth order in space and second order in time in the
interior and second order in both space and time near the boundaries.
These results are generalized to a class of methods of this type for
hyperbolic systems. Computational results are presented and comparisons
are made with other methods.
CS-TR-75-481
Report Number: CS-TR-75-482
Institution: Stanford University, Department of Computer Science
Title: An Algorithm for Finding Best Matches in Logarithmic Expected Time
Author: Friedman, Jerome
Author: Bentley, Jon Louis
Author: Finkel, Raphael Ari
Date: July 1976
Abstract:
An algorithm and data structure are presented for searching a file
containing N records, each described by k real-valued keys, for the m
closest matches or nearest neighbors to a given query record. The
computation required to organize the file is proportional to kN log N.
The expected number of records examined in each search is independent of
the file size. The expected computation to perform each search is
proportional to log N. Empirical evidence suggests that except for very
small files, this algorithm is considerably faster than other methods.
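The data structure in question is what is now called a k-d tree. A much simplified Python sketch (mine; it finds a single nearest neighbor rather than the m best matches, and uses a median split on cycling coordinates) conveys the idea:

    import math

    def build(points, depth=0):
        # points: list of equal-length coordinate tuples
        if not points:
            return None
        axis = depth % len(points[0])
        points = sorted(points, key=lambda p: p[axis])
        mid = len(points) // 2
        return {"point": points[mid], "axis": axis,
                "left": build(points[:mid], depth + 1),
                "right": build(points[mid + 1:], depth + 1)}

    def nearest(node, query, best=None):
        if node is None:
            return best
        d = math.dist(query, node["point"])
        if best is None or d < best[0]:
            best = (d, node["point"])
        axis = node["axis"]
        diff = query[axis] - node["point"][axis]
        near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
        best = nearest(near, query, best)
        if abs(diff) < best[0]:            # the query ball may cross the splitting plane
            best = nearest(far, query, best)
        return best

The last test is where whole subtrees are skipped whenever the current best ball does not reach their splitting plane, which is why the expected number of records examined can stay roughly independent of the file size.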
CS-TR-75-482
Report Number: CS-TR-75-483
Institution: Stanford University, Department of Computer Science
Title: On packing squares with equal squares.
Author: Erdoes, Paul
Author: Graham, Ronald L.
Date: March 1975
Abstract:
The following problem arises in connection with certain
multi-dimensional stock cutting problems: How many non-overlapping open
unit squares may be packed into a large square of side $\alpha$? Of
course, if $\alpha$ is a positive integer, it is trivial to see that
unit squares can be successfully packed. However, if $\alpha$ is not an
integer, the problem becomes much more complicated. Intuitively, one
feels that for $\alpha$ = N + 1/100, say, (where N is an integer), one
should pack $N^2$ unit squares in the obvious way and surrender the
uncovered border area (which is about $\alpha$/50) as unusable waste.
After all, how could it help to place the unit squares at all sorts of
various skew angles? In this note, we show how it helps. In particular,
we prove that we can always keep the amount of uncovered area down to
at most proportional to ${\alpha}^{7/11}$, which for large $\alpha$ is
much less than the linear waste produced by the "natural" packing above.
CS-TR-75-483
Report Number: CS-TR-75-484
Institution: Stanford University, Department of Computer Science
Title: On subgraph number independence in trees.
Author: Graham, Ronald L.
Author: Szemeredi, Endre
Date: March 1975
Abstract:
For finite graphs F and G, let $N_F$(G) denote the number of
occurrences of F in G, i.e., the number of subgraphs of G which are
isomorphic to F. If ${\cal F}$ and ${\cal G}$ are families of graphs,
it is natural to ask them whether or not the quantities $N_F$(G), $F
\in {\cal F}$, are linearly independent when G is restricted to ${\cal
G}$. For example, if ${\cal F}$ = {$K_1$,$K_2$} (where $K_n$ denotes
the complete graph on n vertices) and ${\cal G}$ is the family of all
(finite) $\underline{trees}$ then of course $N_{K_{1}}$(T) -
$N_{K_{2}}$(T) = 1 for all $T \in {\cal G}$. Slightly less trivially,
if ${\cal F} = \{S_n : n = 1,2,3,\ldots\}$ (where $S_n$ denotes the
$\underline{star}$ on n edges) and ${\cal G}$ again is the family of
all trees then $\sum_{n=1}^{\infty} (-1)^{n+1} N_{S_n}(T) = 1$ for
all $T \in {\cal G}$. It will be proved that such a linear dependence
can $\underline{never}$ occur if ${\cal F}$ is finite, no $F \in {\cal
F}$ has an isolated point and ${\cal G}$ contains all trees. This
result has important applications in recent work of L. Lovasz and one
of the authors.
CS-TR-75-484
Report Number: CS-TR-75-485
Institution: Stanford University, Department of Computer Science
Title: On multiplicative representations of integers.
Author: Erdoes, Paul
Author: Szemeredi, Endre
Date: March 1975
Abstract:
In 1969 it was shown by P. Erdoes that if 0 < $a_1$ < $a_2$ <
... < $a_k \leq x$ is a sequence of integers for which the products
$a_i a_j$ are all distinct then the maximum possible value of k
satisfies $\pi(x) + c_2 x^{3/4}/(\log x)^{3/2} < \max k <
\pi(x) + c_1 x^{3/4}/(\log x)^{3/2}$ where $\pi$(x) denotes the
number of primes not exceeding x and $c_1$ and $c_2$ are absolute
constants. In this paper we will be concerned with similar results of
the following type. Suppose 0 < $a_1$ < ... < $a_k \leq x$, 0
< $b_1$ < ... < $b_{\ell} \leq x$ are sequences of integers.
Let g(n) denote the number of representations of n in the form $a_i
b_j$. Then we prove: (i) If g(n) $\leq$ 1 for all n then for some
constant $c_3$, $k\ell < c_3 x^2/\log x$. (ii) For every c there is
an f(c) so that if g(n) $\leq$ c for all n then for some constant
$c_4$, $k\ell < c_4 (x^2/\log x)(\log\log x)^{f(c)}$.
CS-TR-75-485
Report Number: CS-TR-75-486
Institution: Stanford University, Department of Computer Science
Title: Eigenproblems for matrices associated with periodic boundary conditions.
Author: Bjorck, Ake
Author: Golub, Gene H.
Date: March 1975
Abstract:
A survey of algorithms for solving the eigenproblem for a class of
matrices of nearly tridiagonal form is given. These matrices arise from
eigenvalue problems for differential equations where the solution is
subject to periodic boundary conditions. Algorithms both for computing
selected eigenvalues and eigenvectors and for solving the complete
eigenvalue problem are discussed.
CS-TR-75-486
Report Number: CS-TR-75-488
Institution: Stanford University, Department of Computer Science
Title: On complete subgraphs of r-chromatic graphs.
Author: Bollobas, Bela
Author: Erdoes, Paul
Author: Szemeredi, Endre
Date: April 1975
Abstract:
Denote by G(p,q) a graph of p vertices and q edges. $K_r$ =
G(r, ${r \choose 2}$) is the complete graph with r vertices and $K_r$(t)
is the complete r-chromatic (i.e., r-partite) graph with t vertices in
each color class. $G_r$(n) denotes an r-chromatic graph, and
$\delta$(G) is the minimal degree of a vertex of graph G. Furthermore
denote by $f_r$(n) the smallest integer so that every $G_r$(n) with
$\delta(G_r(n)) > f_r(n)$ contains a $K_r$. It is easy to see
that $\lim_{n \rightarrow \infty} f_r(n)/n = c_r$ exists. We show
that $c_4 \geq 2 + 1/9$ and $c_r \geq r-2 + 1/2 - \frac{1}{2(r-2)}$
for r > 4. We prove that if $\delta(G_3(n)) \geq n+t$ then G
contains at least $t^3$ triangles but does not have to contain more
than 4$t^3$ of them.
CS-TR-75-488
Report Number: CS-TR-75-489
Institution: Stanford University, Department of Computer Science
Title: Regular partitions of graphs.
Author: Szemeredi, Endre
Date: April 1975
Abstract:
A crucial lemma in recent work of the author (showing that k-term
arithmetic progression-free sets of integers must have density zero)
stated (approximately) that any large bipartite graph can be decomposed
into relatively few "nearly regular" bipartite subgraphs. In this note
we generalize this result to arbitrary graphs, at the same time
strengthening and simplifying the original bipartite result.
CS-TR-75-489
Report Number: CS-TR-75-490
Institution: Stanford University, Department of Computer Science
Title: Numerical experiments with the spectral test.
Author: Gosper, R. William
Date: May 1975
Abstract:
Following Marsaglia and Dieter, the spectral test for linear
congruential random number generators is developed from the grid or
lattice point model rather than the Fourier transform model. Several
modifications to the published algorithms were tried. One of these
refinements, which uses results from lesser dimensions to compute
higher dimensional ones, was found to decrease the computation time
substantially. A change in the definition of the spectral test is
proposed in the section entitled "A Question of Independence."
CS-TR-75-490
Report Number: CS-TR-75-493
Institution: Stanford University, Department of Computer Science
Title: Describing automata in terms of languages associated with their peripheral devices.
Author: Kurki-Suonio, Reino
Date: May 1975
Abstract:
A unified approach is presented to deal with automata having different
kinds of peripheral devices. This approach is applied to pushdown
automata and Turing machines, leading to elementary proofs of several
well-known theorems concerning transductions, relationship between
pushdown automata and context-free languages, as well as homomorphic
characterization and undecidability questions. In general, this
approach leads to homomorphic characterization of language families
generated by a single language by finite transduction.
CS-TR-75-493
Report Number: CS-TR-75-498
Institution: Stanford University, Department of Computer Science
Title: Automatically Proving the Correctness of Translations Involving Optimized Code
Author: Samet, Hanan
Date: May 1975
Abstract: A formalism is described for proving that programs written in a higher level language are correctly translated to assembly language. In order to demonstrate the validity of the formalism a system has been designed and implemented for proving that programs written in a subset of LISP 1.6 as the high level language are correctly translated to LAP (an assembly language for the PDP-10) as the low level language. This work involves the identification of critical semantic properties of the language and their interrelationship to the instruction repertoire of the computer executing these programs. A primary use of the system is as a postoptimization step in code generation as well as a compiler debugger.
The assembly language programs need not have been generated by a compiler and in fact may be handcoded. The primary restrictions on the assembly language programs relate to calling sequences and well-formedness. The assembly language programs are processed by a program understanding system which simulates their effect and returns as its result a representation of the program in the form of a tree.
The proof procedure is independent of the intermediary mechanism which translates the high level language into the low level language. A proof consists of applying valid transformations to show the equivalence of the forms corresponding to the assembly language program and the original higher level language program, for which there also exists a tree-like intermediate form.
Some interesting results include the ability to handle programs where recursion is implemented by bypassing the start of the program, the detection and pinpointing of a wide class of errors in the assembly language programs, and a deeper understanding of the question of how to deal automatically with translations between high and extremely low level languages.
This dissertation was submitted to the Department of Computer Science and the Committee on Graduate Studies of Stanford University in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
CS-TR-75-498
Report Number: CS-TR-75-500
Institution: Stanford University, Department of Computer Science
Title: Towards better structured definitions of programming languages.
Author: Kurki-Suonio, Reino
Date: September 1975
Abstract:
The use of abstract syntax and a behavioral model is discussed from the
viewpoint of structuring the complexity in definitions of programming
languages. A formalism for abstract syntax is presented which reflects
the possibility of having one defining occurrence and an arbitrary
number of applied occurrences of objects. Attributes can be associated
with such a syntax for restricting the set of objects generated, and
for defining character string representations and semantic
interpretations for the objects. A system of co-operating automata,
described by another abstract syntax, is proposed as a behavioral model
for semantic definition.
CS-TR-75-500
Report Number: CS-TR-75-501
Institution: Stanford University, Department of Computer Science
Title: Procedural events as software interrupts.
Author: Pettersen, Odd
Date: June 1975
Abstract:
The paper deals with procedural events, providing a basis for
synchronization and scheduling, particularly applied on real-time
program systems of multiple parallel activities ("multi-task"). There
is a great need for convenient scheduling mechanisms for minicomputer
systems as used in process control, but so far mechanisms somewhat
similar to those proposed here are found only in PL/I among the
generally known high-level languages. PL/I, however, is not very common
on computers of this size. Also, the mechanisms in PL/I seem more
restricted, as compared to those proposed here. A new type of boolean
program variable, the EVENTMARK, is proposed. Eventmarks represent
events of any kind that may occur within a computational process and
are believed to give very efficient and convenient activation and
scheduling of program modules in a real-time system. An eventmark is
declared similar to a procedure, and the proposed feature could easily
be amended as an extension to existing languages, as well as
incorporated in future language designs.
CS-TR-75-501
Report Number: CS-TR-75-502
Institution: Stanford University, Department of Computer Science
Title: Synchronization of concurrent processes.
Author: Pettersen, Odd
Date: July 1975
Abstract:
The paper gives an overview of commonly used synchronization primitives
and literature, and presents a new form of primitive expressing
conditional critical regions. A new solution is presented to the
problem of "readers and writers", utilizing the proposed
synchronization primitive. The solution is simpler and shorter than
other known algorithms. The first sections of the paper give a tutorial
introduction into established methods, in order to provide a suitable
background for the remaining parts.
CS-TR-75-502
Report Number: CS-TR-75-503
Institution: Stanford University, Department of Computer Science
Title: The macro processing system STAGE2: transfer of comments to the generated text.
Author: Pettersen, Odd
Date: July 1975
Abstract:
This paper is a short description of a small extension of STAGE2,
providing possibilities to copy comments etc. from the source text to
the generated text. The description presupposes familiarity with the
STAGE2 system: its purpose, use and descriptions. Only section 3 of
this paper requires knowledge of the internal structures and working of
the system, and that section is unnecessary for the plain use of the
described feature. The extension, if not used, is completely invisible
to the user: No rules, as described in the original literature, are
changed. A user, unaware of the extension, will see no difference from
the original version.
CS-TR-75-503
Report Number: CS-TR-75-504
Institution: Stanford University, Department of Computer Science
Title: On sparse graphs with dense long paths.
Author: Erdoes, Paul
Author: Graham, Ronald L.
Author: Szemeredi, Endre
Date: September 1975
Abstract:
The following problem was raised by H.-J. Stoss in connection with
certain questions related to the complexity of Boolean functions. An
acyclic directed graph G is said to have property P(m,n) if for any set
X of m vertices of G, there is a directed path of length n in G which
does not intersect X. Let f(m,n) denote the minimum number of edges a
graph with property P(m,n) can have. The problem is to estimate f(m,n).
For the remainder of the paper, we shall restrict ourselves to the case
m = n. We shall prove (1) $c_1$n log n/log log n < f(n,n) <
$c_2$n log n (where $c_1$,$c_2$,..., will hereafter denote suitable
positive constants). In fact, the graph we construct in order to
establish the upper bound on f(n,n) in (1) will have just $c_3$n
vertices. In this case the upper bound in (1) is essentially best
possible since it will also be shown that for $c_4$ sufficiently large,
every graph on $c_4$n vertices having property P(n,n) must have at
least $c_5$n log n edges.
CS-TR-75-504
Report Number: CS-TR-75-505
Institution: Stanford University, Department of Computer Science
Title: Some linear programming aspects of combinatorics.
Author: Chvatal, Vaclav
Date: September 1975
Abstract:
This is the text of a lecture given at the Conference on Algebraic
Aspects of Combinatorics at the University of Toronto in January 1975.
The lecture was expository, aimed at an audience with no previous
knowledge of linear programming.
CS-TR-75-505
Report Number: CS-TR-75-506
Institution: Stanford University, Department of Computer Science
Title: Operational reasoning and denotational semantics.
Author: Gordon, Michael J. C.
Date: August 1975
Abstract:
"Obviously true" properties of programs can be hard to prove when
meanings are specified with a denotational semantics. One cause of this
is that such a semantics usually abstracts away from the running
process - thus properties which are obvious when one thinks about this
lose the basis of their obviousness in the absence of it. To enable
process-based intuitions to be used in constructing proofs one can
associate with the semantics an abstract interpreter so that reasoning
about the semantics can be done by reasoning about computations on the
interpreter. This technique is used to prove several facts about a
semantics of pure LISP. First a denotational semantics and an abstract
interpreter are described. Then it is shown that the denotation of any
LISP form is correctly computed by the interpreter. This is used to
justify an inference rule - called "LISP-induction" - which formalises
induction on the size of computations on the interpreter. Finally
LISP-induction is used to prove a number of results. In particular it
is shown that the function eval is correct relative to the semantics -
i.e. that it denotes a mapping which maps forms (coded as
S-expressions) on to their correct values.
CS-TR-75-506
Report Number: CS-TR-75-507
Institution: Stanford University, Department of Computer Science
Title: Towards a semantic theory of dynamic binding.
Author: Gordon, Michael J. C.
Date: August 1975
Abstract:
The results in this paper contribute to the formulation of a semantic
theory of dynamic binding (fluid variables). The axioms and theorems
are language independent in that they don't talk about programs - i.e.,
syntactic objects - but just about elements in certain domains. Firstly
the equivalence (in the circumstances where it's true) of "tying a
knot" through the environment (elaborated in the paper) and taking a
least fixed point is shown. This is central in proving the correctness
of LISP "eval" type interpreters. Secondly the relation which must hold
between two environments if a program is to have the same meaning in
both is established. It is shown how the theory can be applied to LISP
to yield previously known facts.
CS-TR-75-507
Report Number: CS-TR-75-508
Institution: Stanford University, Department of Computer Science
Title: On computing the transitive closure of a relation.
Author: Eve, James
Date: September 1975
Abstract:
An algorithm is presented for computing the transitive closure of an
arbitrary relation which is based upon a variant of Tarjan's algorithm
[1972] for finding the strongly connected components of a directed
graph. This variant leads to a more compact statement of Tarjan's
algorithm. If V is the number of vertices in the directed graph
representing the relation then the worst case behavior of the proposed
algorithm involves O($V^3$) operations. In this respect it is inferior
to existing algorithms which require O($V^3$/log V) and O($V^{{log}_2
7}$ log V) operations respectively. The best case behavior involves
only O($V^2$) operations.
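As a baseline for the operation counts quoted above, a naive reachability computation (one depth-first search per vertex, which is not the report's SCC-based algorithm) already attains the O($V^3$) worst case; a Python sketch under that assumption:

    def transitive_closure(adj):
        # adj: dict mapping each vertex to a list of successors
        closure = {}
        for s in adj:
            seen, stack = set(), [s]
            while stack:
                u = stack.pop()
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
            closure[s] = seen          # vertices reachable from s
        return closure

The report's algorithm instead builds on a variant of Tarjan's strongly connected components algorithm; vertices in one component necessarily have identical reachability sets, which such an approach can exploit.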
CS-TR-75-508
Report Number: CS-TR-75-509
Institution: Stanford University, Department of Computer Science
Title: Finding the maximal incidence matrix of a large graph.
Author: Overton, Michael L.
Author: Proskurowski, Andrzej
Date: September 1975
Abstract:
This paper deals with the computation of two canonical representations
of a graph. A computer program is presented which searches for "the
maximal incidence matrix" of a large connected graph without multiple
edges or self-loops. The use of appropriate algorithms and data
structures is discussed.
CS-TR-75-509
Report Number: CS-TR-75-511
Institution: Stanford University, Department of Computer Science
Title: Software implementation of a new method of combinatorial hashing.
Author: Dubost, Pierre
Author: Trousse, Jean-Michel
Date: September 1975
Abstract:
This is a study of the software implementation of a new method of
searching with retrieval on secondary keys. A new family of partial
match file designs is presented, the 'worst case' is determined, a
detailed algorithm and program are given and the average execution time
is studied.
CS-TR-75-511
Report Number: CS-TR-75-512
Institution: Stanford University, Department of Computer Science
Title: Applications of path compression on balanced trees.
Author: Tarjan, Robert Endre
Date: August 1975
Abstract:
We devise a method for computing functions defined on paths in trees.
The method is based on tree manipulation techniques first used for
efficiently representing equivalence relations. It has an almost-linear
running time. We apply the method to give O(m $\alpha$(m,n)) algorithms
for two problems. A. Verifying a minimum spanning tree in an undirected
graph (best previous bound: O(m log log n) ). B. Finding dominators in
a directed graph (best previous bound: O(n log n + m) ). Here n is the
number of vertices and m the number of edges in the problem graph, and
$\alpha$(m,n) is a very slowly growing function which is related to a
functional inverse of Ackermann's function. The method is also useful
for solving, in O(m $\alpha$(m,n)) time, certain kinds of pathfinding
problems on reducible graphs. Such problems occur in global flow
analysis of computer programs and in other contexts. A companion paper
will discuss this application.
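The tree manipulation techniques "first used for efficiently representing equivalence relations" are the disjoint-set operations with path compression; a standard Python sketch of that underlying structure follows (the report's contribution, evaluating functions along tree paths, is built on top of this and is not reproduced here):

    def find(parent, x):
        # two-pass find with path compression
        root = x
        while parent[root] != root:
            root = parent[root]
        while parent[x] != root:
            parent[x], x = root, parent[x]   # point every node on the path at the root
        return root

    def union(parent, rank, a, b):
        # union by rank keeps the trees shallow
        ra, rb = find(parent, a), find(parent, b)
        if ra == rb:
            return
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        if rank[ra] == rank[rb]:
            rank[ra] += 1

A sequence of m such operations on n elements runs in O(m $\alpha$(m,n)) time, the same almost-linear bound quoted in the abstract.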
CS-TR-75-512
Report Number: CS-TR-75-513
Institution: Stanford University, Department of Computer Science
Title: A survey of techniques for fixed radius near neighbor searching.
Author: Bentley, Jon Louis
Date: August 1975
Abstract:
This paper is a survey of techniques used for searching in a
multidimensional space. Though we consider specifically the problem of
searching for fixed radius near neighbors (that is, all points within a
fixed distance of a given point), the structures presented here are
applicable to many different search problems in multidimensional
spaces. The orientation of this paper is practical; no theoretical
results are presented. Many areas open for further research are
mentioned.
CS-TR-75-513
Report Number: CS-TR-75-514
Institution: Stanford University, Department of Computer Science
Title: A microprogram control unit based on a tree memory.
Author: Tokura, Nobuki
Date: August 1975
Abstract:
A modularized control unit for microprocessors is proposed that
implements ancestor tree programs. This leads to a reduction of storage
required for address information. The basic architecture is extended to
paged tree memory to enhance the memory space usage. Finally, the
concept of an ancestor tree with shared subtrees is introduced, and the
existence of an efficient algorithm to find sharable subtrees is shown.
CS-TR-75-514
Report Number: CS-TR-75-517
Institution: Stanford University, Department of Computer Science
Title: Distances in orientations of graphs.
Author: Chvatal, Vaclav
Author: Thomassen, Carsten
Date: August 1975
Abstract:
We prove that there is a function h(k) such that every undirected graph
G admits an orientation H with the following property: if an edge uv
belongs to a cycle of length k in G, then uv or vu belongs to a
directed cycle of length at most h(k) in H. Next, we show that every
undirected bridgeless graph of radius r admits an orientation of radius
at most $r^2$+r, and this bound is best possible. We consider the same
problem with radius replaced by diameter. Finally, we show that the
problem of deciding whether an undirected graph admits an orientation
of diameter (resp. radius) two belongs to a class of problems called
NP-hard.
CS-TR-75-517
Report Number: CS-TR-75-518
Institution: Stanford University, Department of Computer Science
Title: Aggregation of inequalities in integer programming.
Author: Chvatal, Vaclav
Author: Hammer, Peter L.
Date: August 1975
Abstract:
Given an m $\times$ n zero-one matrix $A$ we ask
whether there is a single linear inequality $ax \leq b$
whose zero-one solutions are precisely the
zero-one solutions of $Ax \leq e$.
We develop an algorithm for answering this question in O(m$n^2$)
steps and investigate other related problems. Our results may be
interpreted in terms of graph theory and threshold logic.
CS-TR-75-518
Report Number: CS-TR-75-520
Institution: Stanford University, Department of Computer Science
Title: On the representation of data structures in LCF with applications to program generation.
Author: von Henke, Friedrich W.
Date: September 1975
Abstract:
In this paper we discuss techniques of exploiting the obvious
relationship between program structure and data structure for program
generation. We develop methods of program specification that are
derived from a representation of recursive data structures in the Logic
for Computable Functions (LCF). As a step towards a formal problem
specification language we define definitional extensions of LCF. These
include a calculus for (computable) homogeneous sets and restricted
quantification. Concepts that are obtained by interpreting data types
as algebras are used to derive function definition schemes from an LCF
term representing a data structure; they also lead to techniques for
the simplification of expressions in the extended language. The
specification methods are illustrated with a detailed example.
CS-TR-75-520
Report Number: CS-TR-75-521
Institution: Stanford University, Department of Computer Science
Title: Depth perception in stereo computer vision.
Author: Thompson, Clark
Date: October 1975
Abstract:
This report describes a stereo vision approach to depth perception; the
author has built upon a set of programs that decompose the problem in
the following way: 1) Production of a camera model: the position and
orientation of the cameras in 3-space. 2) Generation of matching
point-pairs: loci of corresponding features in the two pictures. 3)
Computation of the point in 3-space for each point-pair. 4)
Presentation of the resultant depth information.
CS-TR-75-521
Report Number: CS-TR-75-522
Institution: Stanford University, Department of Computer Science
Title: Automatic program verification IV: proof of termination within a weak logic of programs.
Author: Luckham, David C.
Author: Suzuki, Norihisa
Date: October 1975
Abstract:
A weak logic of programs is a formal system in which statements that
mean "the program halts" cannot be expressed. In order to prove
termination, we would usually have to use a stronger logical system. In
this paper we show how we can prove termination of both iterative and
recursive programs within a weak logic by adding pieces of code and
placing restrictions on loop invariants and entry conditions. Thus,
most of the existing verifiers which are based on a weak logic of
programs can be used to prove termination of programs without any
modification. We give examples of proofs of termination and of accurate
bounds on computation time that were obtained using the Stanford Pascal
program verifier.
CS-TR-75-522
Report Number: CS-TR-75-523
Institution: Stanford University, Department of Computer Science
Title: BAIL: a debugger for SAIL.
Author: Reiser, John F.
Date: October 1975
Abstract:
BAIL is a debugging aid for SAIL programs, where SAIL is an extended
dialect of ALGOL60 which runs on the PDP-10 computer. BAIL consists of
a breakpoint package and an expression interpreter which allow the user
to stop his program at selected points, examine and change the values
of variables, and evaluate general SAIL expressions. In addition, BAIL
can display text from the source file corresponding to the current
location in the program. In many respects BAIL is like DDT or RAID,
except that BAIL is oriented towards SAIL and knows about SAIL data
types, primitive operations, and procedure implementation.
CS-TR-75-523
Report Number: CS-TR-75-526
Institution: Stanford University, Department of Computer Science
Title: Graph theory and Gaussian elimination.
Author: Tarjan, Robert Endre
Date: November 1975
Abstract:
This paper surveys graph-theoretic ideas which apply to the problem of
solving a sparse system of linear equations by Gaussian elimination.
Included are a discussion of bandwidth, profile, and general sparse
elimination schemes, and of two graph-theoretic partitioning methods.
Algorithms based on these ideas are presented.
CS-TR-75-526
Report Number: CS-TR-75-527
Institution: Stanford University, Department of Computer Science
Title: Center for Reliable Computing: current research.
Author: McCluskey, Edward J.
Author: Wakerly, John F.
Author: Ogus, Roy C.
Date: October 1975
Abstract:
This report summarizes the research work which has been performed, and
is currently active in the Center for Reliable Computing in the Digital
Systems Laboratory, Stanford University.
CS-TR-75-527
Report Number: CS-TR-75-528
Institution: Stanford University, Department of Computer Science
Title: Solving path problems on directed graphs.
Author: Tarjan, Robert Endre
Date: October 1975
Abstract:
This paper considers path problems on directed graphs which are
solvable by a method similar to Gaussian elimination. The paper gives
an axiom system for such problems which is a weakening of Salomaa's
axioms for a regular algebra. The paper presents a general solution
method which requires O($n^3$) time for dense graphs with n vertices
and considerably less time for sparse graphs. The paper also presents a
decomposition method which solves a path problem by breaking it into
subproblems, solving each subproblem by elimination, and combining the
solutions. This method is a generalization of the "reducibility" notion
of data flow analysis, and is a kind of single-element "tearing".
Efficiently implemented, the method requires O(m $\alpha$(m,n)) time
plus time to solve the subproblems, for problem graphs with n vertices
and m edges. Here $\alpha$(m,n) is a very slowly growing function which
is a functional inverse of Ackermann's function. The paper considers
variants of the axiom system for which the solution methods still work,
and presents several applications including solving simultaneous linear
equations and analyzing control flow in computer programs.
CS-TR-75-528
Report Number: CS-TR-75-530
Institution: Stanford University, Department of Computer Science
Title: An adaptive finite difference solver for nonlinear two point boundary problems with mild boundary layers.
Author: Lentini, M.
Author: Pereyra, Victor
Date: November 1975
Abstract:
A variable order variable step finite difference algorithm for
approximately solving m-dimensional systems of the form y' = f(t,y), t
$\in$ [a,b] subject to the nonlinear boundary conditions g(y(a),y(b)) =
0 is presented. A program, PASVAR, implementing these ideas has been
written and the results on several test runs are presented together
with comparisons with other methods. The main features of the new
procedure are: a) Its ability to produce very precise global error
estimates, which in turn allow a very fine control between desired
tolerance and actual output precision. b) Non-uniform meshes allow an
economical and accurate treatment of boundary layers and other sharp
changes in the solutions. c) The combination of automatic variable
order (via deferred corrections) and automatic (adaptive) mesh
selection produces, as in the case of initial value problem solvers, a
versatile, robust, and efficient algorithm.
CS-TR-75-530
Report Number: CS-TR-75-531
Institution: Stanford University, Department of Computer Science
Title: Algorithmic aspects of vertex elimination on directed graphs.
Author: Rose, Donald J.
Author: Tarjan, Robert Endre
Date: November 1975
Abstract:
We consider a graph-theoretic elimination process which is related to
performing Gaussian elimination on sparse systems of linear equations.
We give efficient algorithms to: (1) calculate the fill-in produced by
any elimination ordering; (2) find a perfect elimination ordering if
one exists; and (3) find a minimal elimination ordering. We also show
that problems (1) and (2) are at least as time-consuming as testing
whether a directed graph is transitive, and that the problem of finding
a minimum ordering is NP-complete.
CS-TR-75-531
Report Number: CS-TR-75-532
Institution: Stanford University, Department of Computer Science
Title: Bibliography of Computer Science Department technical reports.
Author: Jacobs, Patricia E.
Date: November 1975
Abstract:
This report lists, in chronological order, all reports from the
Stanford Computer Science series (STAN-CS-xx-xxx), Artificial
Intelligence Memos (AIM), Digital Systems Laboratory Technical reports
(TR) and Technical Notes (TN), plus Stanford Linear Accelerator Center
publications (SLACP) and reports (SLACR). Also, for the first time, we
have provided an author index for these reports (at the end of the
report listings). The bibliography issued in October of 1973 is hereby
brought up to date. Each report is identified by title, author's name,
National Technical Information Service (NTIS) retrieval number, date,
number of pages and the computer science areas treated. Subsequent
journal publication (when known) is also indicated.
CS-TR-75-532
Report Number: CS-TR-75-536
Institution: Stanford University, Department of Computer Science
Title: Interactive generation of object models with a manipulator.
Author: Grossman, David D.
Author: Taylor, Russell H.
Date: December 1975
Abstract:
Manipulator programs in a high level language consist of manipulation
procedures and object model declarations. As higher level languages are
developed, the procedures will shrink while the declarations will grow.
This trend makes it desirable to develop means for automating the
generation of these declarations. A system is proposed which would
permit users to specify certain object models interactively, using the
manipulator itself as a measuring tool in three dimensions. A
preliminary version of the system has been tested.
CS-TR-75-536
Report Number: CS-TR-75-537
Institution: Stanford University, Department of Computer Science
Title: Verification Vision within a programmable assembly system: an introductory discussion.
Author: Bolles, Robert C.
Date: December 1975
Abstract:
This paper defines a class of visual feedback tasks called
"Verification Vision" which includes a significant portion of the
feedback tasks required within a programmable assembly system. It
characterizes a set of general-purpose capabilities which, if
implemented, would provide a user with a system in which to write
programs to perform such tasks. Example tasks and protocols are used to
motivate these semantic capabilities. Of particular importance are the
tools required to extract as much information as possible from planning
and/or training sessions. Four different levels of verification systems
are discussed. They range from a straightforward interactive system
which could handle a subset of the verification vision tasks, to a
completely automatic system which could plan its own strategies and
handle the total range of verification tasks. Several unsolved problems
in the area are discussed.
CS-TR-75-537
Report Number: CS-TR-75-539
Institution: Stanford University, Department of Computer Science
Title: A new approach to recursive programs.
Author: Manna, Zohar
Author: Shamir, Adi
Date: December 1975
Abstract:
In this paper we critically evaluate the classical least-fixedpoint
approach towards recursive programs. We suggest a new approach which
extracts the maximal amount of valuable information embedded in the
programs. The presentation is informal, with emphasis on examples.
CS-TR-75-539
Report Number: CS-TR-76-405
Institution: Stanford University, Department of Computer Science
Title: Stanford Computer Science Department research report.
Author: Davis, Randall
Author: Wright, Margaret H.
Date: January 1976
Abstract:
This collection of reports is divided into two sections. The first
contains the research summaries for individual faculty members and
research associates in the Computer Science Department. Two professors
from Electrical Engineering are included as "Affiliated Faculty"
because their interests are closely related to those of the Department.
The second section gives an overview of the activities of research
groups in the Department. "Group" here is taken to imply many different
things, including people related by various degrees of intellectual
interests, physical proximity, or funding considerations. We have tried
to describe any group whose scope of interest is greater than that of
one person. The list of recent publications for each is not intended to
be comprehensive, but rather to give a feeling for the range of topics
considered. This collection of reports has been assembled to provide a
reasonably comprehensive review of research activities in the
Department. We hope that it will be widely useful -- in particular,
students in the Department may find it helpful in discovering
interesting projects and possible thesis topics. We expect also that it
will be of interest to many other people, both within and outside the
Department.
CS-TR-76-405
Report Number: CS-TR-76-533
Institution: Stanford University, Department of Computer Science
Title:
A generalized conjugate gradient method for the numerical solution of
elliptic partial differential equations
Author: Concus, Paul
Author: Golub, Gene H.
Author: O'Leary, Dianne Prost
Date: January 1976
Abstract:
We consider a generalized conjugate gradient method for solving sparse,
symmetric, positive-definite systems of linear equations, principally
those arising from the discretization of boundary value problems for
elliptic partial differential equations. The method is based on
splitting off from the original coefficient matrix a symmetric,
positive-definite one that corresponds to a more easily solvable system
of equations, and then accelerating the associated iteration using
conjugate gradients. Optimality and convergence properties are
presented, and the relation to other methods is discussed. Several
splittings for which the method seems particularly effective are also
discussed, and for some, numerical examples are given.
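Read with A = M - N, where M is the symmetric positive-definite splitting matrix, the acceleration described amounts to what is now usually called preconditioned conjugate gradients. A hedged numpy sketch under that reading (solve_M applies $M^{-1}$, e.g. a fast Poisson or diagonal solve):

    import numpy as np

    def pcg(A, b, solve_M, tol=1e-10, max_iter=500):
        x = np.zeros_like(b, dtype=float)
        r = b - A @ x
        z = solve_M(r)                    # residual preconditioned by the splitting
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = solve_M(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p     # conjugate search direction
            rz = rz_new
        return x

For example, solve_M = lambda r: r / np.diag(A) corresponds to the simple diagonal (Jacobi) splitting.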
CS-TR-76-533
Report Number: CS-TR-76-535
Institution: Stanford University, Department of Computer Science
Title: A generalized conjugate gradient method for nonsymmetric systems of linear equations
Author: Concus, Paul
Author: Golub, Gene H.
Date: January 1976
Abstract:
We consider a generalized conjugate gradient method for solving systems
of linear equations having nonsymmetric coefficient matrices with
positive-definite symmetric part. The method is based on splitting the
matrix into its symmetric and skew-symmetric parts, and then
accelerating the associated iteration using conjugate gradients, which
simplifies in this case, as only one of the two usual parameters is
required. The method is most effective for cases in which the symmetric
part of the matrix corresponds to an easily solvable system of
equations. Convergence properties are discussed, as well as an
application to the numerical solution of elliptic partial differential
equations.
CS-TR-76-535
Report Number: CS-TR-76-540
Institution: Stanford University, Department of Computer Science
Title: Addition chains with multiplicative cost
Author: Graham, Ronald L.
Author: Yao, Andrew Chi-Chih
Author: Yao, F. Frances
Date: January 1976
Abstract:
If each step in an addition chain is assigned a cost equal to the
product of the numbers added at that step, "binary" addition chains are
shown to minimize total cost.
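As an illustrative sketch (our notation, not the report's proof), the cost of the "binary" chain for a given n can be computed by following the bits of n, doubling and adding one:

    def binary_chain_cost(n):
        """Total multiplicative cost of the 'binary' addition chain for n.

        Each step a = b + c is charged b * c; the binary chain doubles and
        adds 1 following the bits of n, starting from 1.  Illustrative only.
        """
        bits = bin(n)[2:]          # most significant bit first
        chain = [1]
        cost = 0
        x = 1
        for bit in bits[1:]:
            cost += x * x          # doubling step: x + x
            x = 2 * x
            chain.append(x)
            if bit == '1':
                cost += x * 1      # add-one step: x + 1
                x = x + 1
                chain.append(x)
        return chain, cost

    chain, cost = binary_chain_cost(23)   # chain 1 2 4 5 10 11 22 23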
CS-TR-76-540
Report Number: CS-TR-76-542
Institution: Stanford University, Department of Computer Science
Title: The theoretical aspects of the optimal fixedpoint
Author: Manna, Zohar
Author: Shamir, Adi
Date: March 1976
Abstract:
In this paper we define a new type of fixedpoint of recursive
definitions and investigate some of its properties. This optimal
fixedpoint (which always uniquely exists) contains, in some sense, the
maximal amount of "interesting" information which can be extracted from
the recursive definition, and it may be strictly more defined than the
program's least fixedpoint. This fixedpoint can be the basis for
assigning a new semantics to recursive programs. This is a modified and
extended version of part 1 of a paper presented at the Symposium on
Theory of Computing, Albuquerque, New Mexico (May 1975).
CS-TR-76-542
Report Number: CS-TR-76-543
Institution: Stanford University, Department of Computer Science
Title: Optimal polyphase sorting
Author: Zave, Derek A.
Date: March 1976
Abstract:
A read-forward polyphase merge algorithm is described which performs
the polyphase merge starting from an arbitrary string distribution.
This algorithm minimizes the volume of information moved. Since this
volume is easily computed, it is possible to construct dispersion
algorithms which anticipate the merge algorithm. Two such dispersion
techniques are described. The first algorithm requires that the number
of strings to be dispersed be known in advance; this algorithm is
optimal. The second algorithm makes no such requirement, but is not
always optimal. In addition, performance estimates are derived and both
algorithms are shown to be asymptotically optimal.
CS-TR-76-543
Report Number: CS-TR-76-544
Institution: Stanford University, Department of Computer Science
Title: Removing trivial assignments from programs
Author: Mont-Reynaud, Bernard
Date: March 1976
Abstract:
An assignment X $\leftarrow$ Y in a program is "trivial" when both X
and Y are simple program variables. The paper describes a
transformation which removes all such assignments from a program P,
producing a program P' which executes faster than P but usually has a
larger size. The number of variables used by P' is also minimized.
Worst-case analysis of the transformation algorithm leads to
nonpolynomial bounds. Such inefficiency, however, does not arise in
typical situations, and the technique appears to be of interest for
practical compiler optimization.
CS-TR-76-544
Report Number: CS-TR-76-545
Institution: Stanford University, Department of Computer Science
Title: Space bounds for a game on graphs
Author: Paul, Wolfgang L.
Author: Tarjan, Robert Endre
Author: Celoni, James R.
Date: March 1976
Abstract:
We study a one-person game played by placing pebbles, according to
certain rules, on the vertices of a directed graph. In [J. Hopcroft, W.
Paul, and L. Valiant, "On time versus space and related problems,"
Proc. 16th Annual Symp. on Foundations of Computer Science (1975),
pp.57-64] it was shown that for each graph with n vertices and maximum
in-degree d, there is a pebbling strategy which requires at most c(d)
n/log n pebbles. Here we show that this bound is tight to within a
constant factor. We also analyze a variety of pebbling algorithms,
including one which achieves the O(n/log n) bound.
CS-TR-76-545
Report Number: CS-TR-76-547
Institution: Stanford University, Department of Computer Science
Title: Iterative algorithms for global flow analysis
Author: Tarjan, Robert Endre
Date: March 1976
Abstract:
This paper studies iterative methods for the global flow analysis of
computer programs. We define a hierarchy of global flow problem
classes, each solvable by an appropriate generalization of the "node
listing" method of Kennedy. We show that each of these generalized
methods is optimum, among all iterative algorithms, for solving
problems within its class. We give lower bounds on the time required by
iterative algorithms for each of the problem classes.
CS-TR-76-547
Report Number: CS-TR-76-549
Institution: Stanford University, Department of Computer Science
Title: Automatic program verification V: verification-oriented proof rules for arrays, records and pointers
Author: Luckham, David C.
Author: Suzuki, Norihisa
Date: March 1976
Abstract:
A practical method is presented for automating in a uniform way the
verification of Pascal programs that operate on the standard Pascal
data structures ARRAY, RECORD, and POINTER. New assertion language
primitives are introduced for describing computational effects of
operations on these data structures. Axioms defining the semantics of
the new primitives are given. Proof rules for standard Pascal
operations on pointer variables are then defined in terms of the
extended assertion language. Similar rules for records and arrays are
special cases. An extensible axiomatic rule for the Pascal memory
allocation operation, NEW, is also given. These rules have been
implemented in the Stanford Pascal program verifier. Examples
illustrating the verification of programs which operate on list
structures implemented with pointers and records are discussed. These
include programs with side-effects.
CS-TR-76-549
Report Number: CS-TR-76-550
Institution: Stanford University, Department of Computer Science
Title: Finding a maximum independent set
Author: Tarjan, Robert Endre
Author: Trojanowski, Anthony E.
Date: June 1976
Abstract:
We present an algorithm which finds a maximum independent set in an
n-vertex graph in O($2^{n/3}$) time. The algorithm can thus handle
graphs roughly three times as large as could be analyzed using a naive
algorithm.
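For contrast with the naive approach the abstract mentions, a basic branching search might look like the following sketch; it does not achieve the O($2^{n/3}$) bound and only shows the kind of enumeration the report's algorithm improves upon:

    def max_independent_set(adj):
        """Naive branching search for a maximum independent set.

        adj maps each vertex to the set of its neighbours.  At each step we
        branch on a remaining vertex v: either exclude v, or include v and
        discard all of its neighbours.  This simple search is exponential
        and does NOT achieve the report's O(2^(n/3)) bound.
        """
        def solve(vertices):
            if not vertices:
                return set()
            v = next(iter(vertices))
            without_v = solve(vertices - {v})            # branch 1: v excluded
            with_v = {v} | solve(vertices - {v} - adj[v])  # branch 2: v included
            return with_v if len(with_v) > len(without_v) else without_v

        return solve(set(adj))

    # 5-cycle: a maximum independent set has size 2.
    adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
    print(max_independent_set(adj))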
CS-TR-76-550
Report Number: CS-TR-76-551
Institution: Stanford University, Department of Computer Science
Title: The State of the Art of Computer Programming
Author: Knuth, Donald E.
Date: June 1976
Abstract:
This report lists all corrections and changes to volumes 1 and 3 of
"The Art of Computer Programming," as of May 14, 1976. The changes
apply to the most recent printings of both volumes (February and March,
1975); if you have an earlier printing there have been many other
changes not indicated here.
CS-TR-76-551
Report Number: CS-TR-76-553
Institution: Stanford University, Department of Computer Science
Title: Complexity of monotone networks for computing conjunctions
Author: Tarjan, Robert Endre
Date: June 1976
Abstract:
Let $F_1$, $F_2$,..., $F_m$ be a set of Boolean functions of the form
$F_i = \wedge\{x \in X_i\}$, where $\wedge$ denotes conjunction and
each $X_i$ is a subset of a set X of n Boolean variables. We study the
size of monotone Boolean networks for computing such sets of functions.
We exhibit anomalous sets of conjunctions whose smallest monotone
networks contain disjunctions. We show that if |$F_i$| is sufficiently
small for all i, such anomalies cannot happen. We exhibit sets of m
conjunctions in n unknowns which require $c_2 m \alpha(m,n)$ binary
conjunctions, where $\alpha(m,n)$ is a very slowly growing function
related to a functional inverse of Ackermann's function. This class of
examples shows that an algorithm given in [STAN-CS-75-512] for
computing functions defined on paths in trees is optimum to within a
constant factor.
CS-TR-76-553
Report Number: CS-TR-76-555
Institution: Stanford University, Department of Computer Science
Title: Monte Carlo simulation of tolerancing in discrete parts manufacturing and assembly
Author: Grossman, David D.
Date: May 1976
Abstract:
The assembly of discrete parts is strongly affected by imprecise
components, imperfect fixtures and tools, and inexact measurements. It
is often necessary to design higher precision into the manufacturing
and assembly process than is functionally needed in the final product.
Production engineers must trade off between alternative ways of
selecting individual tolerances in order to achieve minimum cost while
preserving product integrity. This paper describes a comprehensive
Monte Carlo method for systematically analysing the stochastic
implications of tolerancing and related forms of imprecision. The
method is illustrated by four examples, one of which is chosen from the
field of assembly by computer controlled manipulators.
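A toy one-dimensional version of such a Monte Carlo tolerance analysis (our own simplified example, far cruder than the models treated in the report) could be written as:

    import random

    def monte_carlo_stackup(nominals, tolerances, limit, trials=100_000):
        """Monte Carlo estimate of the fraction of assemblies out of tolerance.

        A toy one-dimensional stack-up: each part length is drawn uniformly
        within nominal +/- tolerance, and the assembly fails if the total
        length exceeds `limit`.  Purely illustrative.
        """
        failures = 0
        for _ in range(trials):
            total = sum(random.uniform(n - t, n + t)
                        for n, t in zip(nominals, tolerances))
            if total > limit:
                failures += 1
        return failures / trials

    # Three parts of nominal length 10 with +/- 0.1 tolerance, 30.2 limit.
    rate = monte_carlo_stackup([10.0, 10.0, 10.0], [0.1, 0.1, 0.1], 30.2)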
CS-TR-76-555
Report Number: CS-TR-76-558
Institution: Stanford University, Department of Computer Science
Title: Is "sometime" sometimes better than "always"? Intermittent assertions in proving program correctness
Author: Manna, Zohar
Author: Waldinger, Richard J.
Date: March 1977
Abstract:
This paper explores a technique for proving the correctness and
termination of programs simultaneously. This approach, which we call
the intermittent-assertion method, involves documenting the program
with assertions that must be true at some time when control passes
through the corresponding point, but that need not be true every time.
The method, introduced by Burstall, promises to provide a valuable
complement to the more conventional methods. We first introduce the
intermittent-assertion method with a number of examples of correctness
and termination proofs. Some of these proofs are markedly simpler than
their conventional counterparts. On the other hand, we show that a
proof of correctness or termination by any of the conventional
techniques can be rephrased directly as a proof using intermittent
assertions. Finally, we show how the intermittent assertion method can
be applied to prove the validity of program transformations and the
correctness of continuously operating programs. This is a revised and
simplified version of a previous paper with the same title (AIM-281,
June 1976).
CS-TR-76-558
Report Number: CS-TR-76-559
Institution: Stanford University, Department of Computer Science
Title: Rank degeneracy and least squares problems
Author: Golub, Gene H.
Author: Klema, Virginia C.
Author: Stewart, Gilbert W.
Date: August 1976
Abstract:
This paper is concerned with least squares problems when the least
squares matrix A is near a matrix that is not of full rank. A
definition of numerical rank is given. It is shown that under certain
conditions when A has numerical rank r there is a distinguished r
dimensional subspace of the column space of A that is insensitive to
how it is approximated by r independent columns of A. The consequences
of this fact for the least squares problem are examined. Algorithms are
described for approximating the stable part of the column space of A.
CS-TR-76-559
Report Number: CS-TR-76-561
Institution: Stanford University, Department of Computer Science
Title: Mathematical Programming Language -- user's guide
Author: Woods, Donald R.
Date: August 1976
Abstract:
Mathematical Programming Language (MPL) is a programming language
specifically designed for the implementation of mathematical software
and, in particular, experimental mathematical programming software. In
the past there has been a wide gulf between the applied mathematicians
who design mathematical algorithms (but often have little appreciation
of the fine points of computing) and the professional programmer, who
may have little or no understanding of the mathematics of the problem
he is programming. The result is that a vast number of mathematical
algorithms have been devised and published, with only a small fraction
being actually implemented and experimentally compared on selected
representative problems. MPL is designed to be as close as possible to
the terminology used by the mathematician while retaining as far as
possible programming sophistications which make for good software
systems. The result is a programming language which (hopefully!) allows
the writing of clear, concise, easily read programs, especially by
persons who are not professional programmers.
CS-TR-76-561
Report Number: CS-TR-76-568
Institution: Stanford University, Department of Computer Science
Title:
Exploratory study of computer integrated assembly systems. Progress
report 3, covering the period December 1, 1975 to July 31, 1976
Author: Binford, Thomas O.
Author: Grossman, David D.
Author: Liu, C. Richard
Author: Bolles, Robert C.
Author: Finkel, Raphael A.
Author: Mujtaba, M. Shahid
Author: Roderick, Michael D.
Author: Shimano, Bruce E.
Author: Taylor, Russell H.
Author: Goldman, Ronald H.
Author: Jarvis, J. Pitts, III
Author: Scheinman, Victor D.
Author: Gafford, Thomas A.
Date: August 1976
Abstract:
The Computer Integrated Assembly Systems project is concerned with
developing the software technology of programmable assembly devices,
including computer controlled manipulators and vision systems. A
complete hardware system has been implemented that includes
manipulators with tactile sensors and TV cameras, tools, fixtures, and
auxiliary devices, a dedicated minicomputer, and a time-shared large
computer equipped with graphic display terminals. An advanced software
system called AL has been developed that can be used to program
assembly applications. Research currently underway includes refinement
of AL, development of improved languages and interactive programming
techniques for assembly and vision, extension of computer vision to
areas which are currently infeasible, geometric modeling of objects and
constraints, assembly simulation, control algorithms, and adaptive
methods of calibration.
CS-TR-76-568
Report Number: CS-TR-76-569
Institution: Stanford University, Department of Computer Science
Title:
Calculation of interpolating natural spline functions using de Boor's
package for calculating with B-splines
Author: Herriot, John G.
Date: October 1976
Abstract:
A FORTRAN subroutine is described for finding interpolating natural
splines of odd degree for an arbitrary set of data points. The
subroutine makes use of several of the subroutines in de Boor's package
for calculating with B-splines. An Algol W translation of the
interpolating natural spline subroutine and of the required subroutines
of the de Boor package is also given. Timing tests and accuracy tests
for the routines are described.
CS-TR-76-569
Report Number: CS-TR-76-572
Institution: Stanford University, Department of Computer Science
Title: An FOL primer
Author: Filman, Robert E.
Author: Weyhrauch, Richard W.
Date: September 1976
Abstract:
This primer is an introduction to FOL, an interactive proof checker for
first order logic. Its examples can be used to learn the FOL system, or
read independently for a flavor of our style of interactive proof
checking. Several example proofs are presented, successively increasing
in the complexity of the FOL commands employed.
CS-TR-76-572
Report Number: CS-TR-76-573
Institution: Stanford University, Department of Computer Science
Title: The stationary p-tree forest
Author: Jonassen, Arne T.
Date: October 1976
Abstract:
This paper contains a theoretical analysis of the conditions of a
priority queue strategy after an infinite number of alternating
insert/remove steps. Expected insertion time, expected length, etc. are
found.
CS-TR-76-573
Report Number: CS-TR-76-574
Institution: Stanford University, Department of Computer Science
Title: SAIL
Author: Reiser, John F.
Date: August 1976
Abstract:
Sail is a high-level programming language for the PDP-10 computer. It
includes an extended ALGOL 60 compiler and a companion set of
execution-time routines. In addition to ALGOL, the language features:
(1) flexible linking to hand-coded machine language algorithms, (2)
complete access to the PDP-10 I/O facilities, (3) a complete system of
compile-time arithmetic and logic as well as a flexible macro system,
(4) a high-level debugger, (5) records and references, (6) sets and
lists, (7) an associative data structure, (8) independent processes,
(9) procedure variables, (10) user modifiable error handling, (11)
backtracking, and (12) interrupt facilities. This manual describes the
Sail language and the execution-time routines for the typical Sail
user: a non-novice programmer with some knowledge of ALGOL. It lies
somewhere between being a tutorial and a reference manual.
CS-TR-76-574
Report Number: CS-TR-76-575
Institution: Stanford University, Department of Computer Science
Title: SAIL tutorial
Author: Smith, Nancy W.
Date: October 1976
Abstract:
This tutorial is designed for a beginning user of Sail, an ALGOL-like
language for the PDP10. The first part covers the basic statements and
expressions of the language; remaining topics include macros, records,
conditional compilation, and input/output. Detailed examples of Sail
programming are included throughout, and only a minimum of programming
background is assumed.
CS-TR-76-575
Report Number: CS-TR-76-578
Institution: Stanford University, Department of Computer Science
Title: Theoretical and practical aspects of some initial-boundary value problems in fluid dynamics
Author: Oliger, Joseph
Author: Sundström, Arne
Date: November 1976
Abstract:
Initial-boundary value problems for several systems of partial
differential equations from fluid dynamics are discussed. Both rigid
wall and open boundary problems are treated. Boundary conditions are
formulated and shown to yield well-posed problems for the Eulerian
equations for gas dynamics, the shallow-water equations, and linearized
constant coefficient versions of the incompressible, anelastic
equations. The "primitive" hydrostatic meteorological equations are
shown to be ill-posed with any specification of local, pointwise
boundary conditions. Analysis of simplified versions of this system
illustrates the mechanism responsible for ill-posedness.
CS-TR-76-578
Report Number: CS-TR-76-579
Institution: Stanford University, Department of Computer Science
Title: The A0 inversion model of program paging behavior
Author: Baskett, Forest
Author: Rafii, Abbas
Date: November 1976
Abstract:
When the parameters of a simple stochastic model of the memory
referencing behavior of computer programs are carefully selected, the
model is able to mimic the paging behavior of a set of actual programs.
The mimicry is successful using several different page replacement
algorithms and a wide range of real memory sizes in a virtual memory
environment. The model is based on the independent reference model with
a new procedure for determining the page reference probabilities, the
parameters of the model. We call the result the A0 inversion
independent reference model. Since the fault rate (or miss ratio) is
one aspect of program behavior that the model is able to capture for
many different memory sizes, the model should be especially useful for
evaluating multilevel memory organizations based on newly emerging
memory technologies.
CS-TR-76-579
Report Number: CS-TR-76-580
Institution: Stanford University, Department of Computer Science
Title: Towards a procedural understanding of semantics
Author: Winograd, Terry A.
Date: November 1976
Abstract:
The term "procedural semantics" has been used in a variety of ways, not
all compatible, and not all comprehensible. In this paper, I have
chosen to apply the term to a broad paradigm for studying semantics
(and in fact, all of linguistics). This paradigm has developed in a
context of writing computer programs which use natural language, but it
is not a theory of computer programs or programming techniques. It is
"procedural" because it looks at the underlying structure of language
as fundamentally shaped by the nature of processes for language
production and comprehension. It is based on the belief that there is a
level of explanation at which there are significant similarities
between the psychological processes of human language use and the
computational processes in computer programs we can construct and
study. Its goal is to develop a body of theory at this level. This
approach necessitates abandoning or modifying several currently
accepted doctrines, including the way in which distinctions have been
drawn between "semantics" and "pragmatics" and between "performance"
and "competence". The paper has three major sections. It first lays out
the paradigm assumptions which guide the enterprise, and elaborates a
model of cognitive processing and language use. It then illustrates how
some specific semantic problems might be approached from a procedural
perspective, and contrasts the procedural approach with formal
structural and truth conditional approaches. Finally, it discusses the
goals of linguistic theory and the nature of the linguistic
explanation. Much of what is presented here is a speculation about the
nature of a paradigm yet to be developed. This paper is an attempt to
be evocative rather than definitive; to convey intuitions rather than
to formulate crucial arguments which justify this approach over others.
It will be successful if it suggests some ways of looking at language
which lead to further understanding.
CS-TR-76-580
Report Number: CS-TR-76-581
Institution: Stanford University, Department of Computer Science
Title: An overview of KRL, a Knowledge Representation Language
Author: Bobrow, Daniel G.
Author: Winograd, Terry A.
Date: November 1976
Abstract:
This paper describes KRL, a Knowledge Representation Language designed
for use in understander systems. It outlines both the general concepts
which underlie our research and the details of KRL-0, an experimental
implementation of some of these concepts. KRL is an attempt to
integrate procedural knowledge with a broad base of declarative forms.
These forms provide a variety of ways to express the logical structure
of the knowledge, in order to give flexibility in associating
procedures (for memory and reasoning) with specific pieces of
knowledge, and to control the relative accessibility of different facts
and descriptions. The formalism for declarative knowledge is based on
structured conceptual objects with associated descriptions. These
objects form a network of memory units with several different sorts of
linkages, each having well-specified implications for the retrieval
process. Procedures can be associated directly with the internal
structure of a conceptual object. This procedural attachment allows the
steps for a particular operation to be determined by characteristics of
the specific entities involved. The control structure of KRL is based
on the belief that the next generation of intelligent programs will
integrate data-directed and goal-directed processing by using
multi-processing. It provides for a priority-ordered multi-process
agenda with explicit (user-provided) strategies for scheduling and
resource allocation. It provides procedure directories which operate
along with process frameworks to allow procedural parameterization of
the fundamental system processes for building, comparing, and
retrieving memory structures. Future development of KRL will include
integrating procedure definition with the descriptive formalism.
CS-TR-76-581
Report Number: CS-TR-76-583
Institution: Stanford University, Department of Computer Science
Title: Determining the stability number of a graph
Author: Chvatal, Vaclav
Date: December 1976
Abstract:
We formalize certain rules for deriving upper bounds on the stability
number of a graph. The resulting system is powerful enough to (i)
encompass the algorithms of Tarjan's type and (ii) provide very short
proofs on graphs for which the stability number equals the
clique-covering number. However, our main result shows that for almost
all graphs with a (sufficiently large) linear number of edges, proofs
within our system must have at least exponential length.
CS-TR-76-583
Report Number: CS-TR-76-585
Institution: Stanford University, Department of Computer Science
Title:
Numerical solution of nonlinear elliptic partial differential equations
by a generalized conjugate gradient method
Author: Concus, Paul
Author: Golub, Gene H.
Author: O'Leary, Dianne Prost
Date: December 1976
Abstract:
We have studied previously a generalized conjugate gradient method for
solving sparse positive-definite systems of linear equations arising
from the discretization of elliptic partial-differential
boundary-value problems. Here, extensions to the nonlinear case are
considered. We split the original discretized operator into the sum of
two operators, one of which corresponds to a more easily solvable
system of equations, and accelerate the associated iteration based on
this splitting by (nonlinear) conjugate gradients. The behavior of the
method is illustrated for the minimal surface equation with splittings
corresponding to nonlinear SSOR, to approximate factorization of the
Jacobian matrix, and to elliptic operators suitable for use with fast
direct methods. The results of numerical experiments are given as well
for a mildly nonlinear example, for which, in the corresponding linear
case, the finite termination property of the conjugate gradient
algorithm is crucial.
CS-TR-76-585
Report Number: CS-TR-76-586
Institution: Stanford University, Department of Computer Science
Title: The evolution of programs: a system for automatic program modification
Author: Dershowitz, Nachum
Author: Manna, Zohar
Date: December 1976
Abstract:
An attempt is made to formulate techniques of program modification,
whereby a program that achieves one result can be transformed into a
new program that uses the same principles to achieve a different goal.
For example, a program that uses the binary search paradigm to
calculate the square-root of a number may be modified to divide two
numbers in a similar manner, or vice versa. Program debugging is
considered as a special case of modification: if a program computes
wrong results, it must be modified to achieve the intended results. The
application of abstract program schemata to concrete problems is also
viewed from the perspective of modification techniques. We have
embedded this approach in a running implementation; our methods are
illustrated with several examples that have been performed by it.
CS-TR-76-586
Report Number: CS-TR-77-432
Institution: Stanford University, Department of Computer Science
Title: A users manual for FOL.
Author: Weyhrauch, Richard W.
Date: July 1977
Abstract:
This manual explains how to use the proof checker FOL, and
supersedes all previous manuals. FOL checks proofs of a natural
deduction style formulation of first order functional calculus with
equality augmented in the following ways: (i) it is a many-sorted
first-order logic in which a partial order over the sorts may be
specified; (ii) conditional expressions are allowed for forming terms;
(iii) axiom schemata with predicate and function parameters are allowed;
(iv) purely propositional deductions can be made in a single step; (v)
a partial model of the language can be built in a LISP environment and
some deductions can be made by direct computation in this model; (vi)
there is a limited ability to make metamathematical arguments; (vii)
there are many operational conveniences. A major goal of FOL is to
create an environment where formal proofs can be carefully examined
with the eventual aim of designing practical tools for manipulating
proofs in pure mathematics and about the correctness of programs. This
includes checking proofs generated by other programs. FOL is also a
research tool in modeling common-sense reasoning including reasoning
about knowledge and belief.
CS-TR-77-432
Report Number: CS-TR-77-588
Institution: Stanford University, Department of Computer Science
Title: On computing the singular value decomposition
Author: Chan, Tony Fan C.
Date: February 1977
Abstract:
The most well-known and widely-used algorithm for computing the
Singular Value Decomposition (SVD) of an m x n rectangular matrix A
nowadays is the Golub-Reinsch algorithm [1971]. In this paper, it is
shown that by (1) first triangularizing the matrix A by Householder
transformations before bidiagonalizing it, and (2) accumulating some
left transformations on an n x n array instead of on an m x n array, the
resulting algorithm is often more efficient than the Golub-Reinsch
algorithm, especially for matrices with considerably more rows than
columns (m >> n), such as in least squares applications. The two
algorithms are compared in terms of operation counts, and computational
experiments that have been carried out verify the theoretical
comparisons. The modified algorithm is more efficient even when m is
only slightly greater than n, and in some cases can achieve as much as
50% savings when m >> n. If accumulation of left transformations
is desired, then $n^2$ extra storage locations are required (relatively
small if m >> n), but otherwise no extra storage is required. The
modified algorithm uses only orthogonal transformations and is
therefore numerically stable. In the Appendix, we give the Fortran code
of a hybrid method which automatically selects the more efficient of
the two algorithms to use depending upon the input values for m and n.
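As a rough modern sketch of the triangularize-first idea (using library QR and SVD routines in place of the report's Householder and bidiagonalization code, and not the Fortran hybrid of the Appendix):

    import numpy as np

    def svd_tall(A):
        """SVD of a tall matrix via an initial QR factorization.

        For m >> n it can be cheaper to triangularize A = Q R first and then
        compute the SVD of the small n x n factor R.  Library routines stand
        in here for the steps described in the report.
        """
        Q, R = np.linalg.qr(A)            # A (m x n) = Q (m x n) R (n x n)
        U_r, s, Vt = np.linalg.svd(R)     # SVD of the triangular factor
        U = Q @ U_r                       # left singular vectors of A
        return U, s, Vt

    A = np.random.default_rng(0).standard_normal((1000, 5))
    U, s, Vt = svd_tall(A)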
CS-TR-77-588
Report Number: CS-TR-77-589
Institution: Stanford University, Department of Computer Science
Title: A knowledge-based system for the interpretation of protein x-ray crystallographic data
Author: Engelmore, Robert S.
Author: Nii, H. Penny
Date: February 1977
Abstract:
The broad goal of this project is to develop intelligent computational
systems to infer the three-dimensional structures of proteins from
x-ray crystallographic data. The computational systems under
development use both formal and judgmental knowledge from experts to
select appropriate procedures and to constrain the space of plausible
protein structures. The hypothesis generating and testing procedures
operate upon a variety of representations of the data, and work with
several different descriptions of the structure being inferred. The
system consists of a number of independent but cooperating knowledge
sources which propose, augment and verify a solution to the problem as
it is incrementally generated.
CS-TR-77-589
Report Number: CS-TR-77-593
Institution: Stanford University, Department of Computer Science
Title: Explanation capabilities of production-based consultation systems
Author: Scott, A. Carlisle
Author: Clancey, William J.
Author: Davis, Randall
Author: Shortliffe, Edward H.
Date: February 1977
Abstract:
A computer program that models an expert in a given domain is more
likely to be accepted by experts in that domain, and by non-experts
seeking its advice, if the system can explain its actions. An
explanation capability not only adds to the system's credibility, but
also enables the non-expert user to learn from it. Furthermore, clear
explanations allow an expert to check the system's "reasoning",
possibly discovering the need for refinements and additions to the
system's knowledge base. In a developing system, an explanation
capability can be used as a debugging aid to verify that additions to
the system are working as they should. This paper discusses the general
characteristics of explanation systems: what types of explanations they
should be able to give, what types of knowledge will be needed in order
to give these explanations, and how this knowledge might be organized.
The explanation facility in MYCIN is discussed as an illustration of
how the various problems might be approached.
CS-TR-77-593
Report Number: CS-TR-77-596
Institution: Stanford University, Department of Computer Science
Title: A review of knowledge based problem solving as a basis for a genetics experiment designing system
Author: Stefik, Mark J.
Author: Martin, Nancy
Date: February 1977
Abstract:
It is generally accepted that problem solving systems require a wealth
of domain specific knowledge for effective performance in complex
domains. This report takes the view that all domain specific knowledge
should be expressed in a knowledge base. With this in mind, the ideas
and techniques from problem solving and knowledge base research are
reviewed and outstanding problems are identified. Finally, a task
domain is characterized in terms of objects, actions, and
control/strategy knowledge and suggestions are made for creating a
uniform knowledge base management system to be used for knowledge
acquisition, problem solving, and explanation.
CS-TR-77-596
Report Number: CS-TR-77-597
Institution: Stanford University, Department of Computer Science
Title: Model-directed learning of production rules
Author: Buchanan, Bruce G.
Author: Mitchell, Tom M.
Date: March 1977
Abstract:
The Meta-DENDRAL program is described in general terms that are
intended to clarify the similarities and differences to other learning
programs. Its approach of model-directed heuristic search through a
complex space of possible rules appears well suited to many induction
tasks. The use of a strong model of the domain to direct the rule
search has been demonstrated for rule formation in two areas of
chemistry. The high performance of programs which use the generated
rules attests to the success of this learning strategy.
CS-TR-77-597
Report Number: CS-TR-77-602
Institution: Stanford University, Department of Computer Science
Title: The numerically stable reconstruction of a Jacobi matrix from spectral data
Author: Boor, Carl de
Author: Golub, Gene H.
Date: March 1977
Abstract:
We show how to construct, from certain spectral data, a discrete inner
product for which the associated sequence of monic orthogonal
polynomials coincides with the sequence of appropriately normalized
characteristic polynomials of the leading principal submatrices of the
Jacobi matrix. The generation of these orthogonal polynomials via their
three term recurrence relation, as popularized by Forsythe, then
provides a stable means of computing the entries of the Jacobi matrix.
The resulting algorithm might be of help in the approximate solution of
inverse eigenvalue problems for Sturm-Liouville equations. Our
construction provides, incidentally, very simple proofs of known
results concerning existence and uniqueness of a Jacobi matrix
satisfying given spectral data and its continuous dependence on that
data.
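For readers who want the connection spelled out (our notation, not the report's): monic orthogonal polynomials satisfy the three term recurrence $p_{k+1}(x) = (x - \alpha_k)p_k(x) - \beta_k p_{k-1}(x)$ with $p_{-1} = 0$, $p_0 = 1$ and $\beta_k > 0$; the corresponding Jacobi matrix carries $\alpha_0, \alpha_1, \ldots$ on its diagonal and $\sqrt{\beta_1}, \sqrt{\beta_2}, \ldots$ on its sub- and superdiagonals, so that $p_k$ is, up to normalization, the characteristic polynomial of its leading k x k submatrix.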
CS-TR-77-602
Report Number: CS-TR-77-603
Institution: Stanford University, Department of Computer Science
Title: Reference machines require non-linear time to maintain disjoint sets
Author: Tarjan, Robert Endre
Date: March 1977
Abstract:
This paper describes a machine model intended to be useful in deriving
realistic complexity bounds for tasks requiring list processing. As an
example of the use of the model, the paper shows that any such machine
requires non-linear time in the worst case to compute unions of
disjoint sets on-line. All set union algorithms known to me are
instances of the model and are thus subject to the derived bound. One
of the known algorithms achieves the bound to within a constant factor.
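For context (our illustration, not the report's machine model), the on-line disjoint set union problem in question is the one served by the familiar union-find structure; a minimal sketch of one of the known algorithms alluded to above:

    class DisjointSets:
        """Union-find with path halving (a form of path compression) and
        union by rank.  Illustrates the on-line set union problem the
        report analyses."""
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            if self.rank[rx] < self.rank[ry]:
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1

    ds = DisjointSets(5)
    ds.union(0, 1); ds.union(3, 4)
    print(ds.find(1) == ds.find(0), ds.find(2) == ds.find(0))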
CS-TR-77-603
Report Number: CS-TR-77-604
Institution: Stanford University, Department of Computer Science
Title: Control of the dissipativity of Lax-Wendroff type methods for first order systems of hyperbolic equations
Author: Chan, Tony Fan C.
Author: Oliger, Joseph
Date: March 1977
Abstract:
Lax-Wendroff methods for hyperbolic systems have two characteristics
which are sometimes troublesome. They are sometimes too dissipative --
they may smooth the solution excessively -- and their dissipative
behavior does not affect all modes of the solution equally. Both of
these difficulties can be remedied by adding properly chosen accretive
terms. We develop modifications of the Lax-Wendroff method which
equilibrate the dissipativity over the fundamental modes of the
solution and allow the magnitude of the dissipation to be controlled.
We show that these methods are stable for the mixed initial boundary
value problem and develop analogous formulations for the two-step
Lax-Wendroff and MacCormack methods.
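For reference (not the report's modified scheme), the classical one-step Lax-Wendroff update for the model problem $u_t + a u_x = 0$ on a periodic grid can be sketched as:

    import numpy as np

    def lax_wendroff_step(u, a, dt, dx):
        """One classical Lax-Wendroff step for u_t + a u_x = 0 (periodic grid).

        This is the unmodified scheme whose built-in dissipation the report
        proposes to control; the accretive correction terms of the report
        are not included here.
        """
        c = a * dt / dx                       # Courant number
        up = np.roll(u, -1)                   # u_{j+1}
        um = np.roll(u, 1)                    # u_{j-1}
        return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

    x = np.linspace(0.0, 1.0, 100, endpoint=False)
    u = np.sin(2 * np.pi * x)
    u = lax_wendroff_step(u, a=1.0, dt=0.005, dx=x[1] - x[0])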
CS-TR-77-604
Report Number: CS-TR-77-605
Institution: Stanford University, Department of Computer Science
Title: A model for learning systems
Author: Smith, Reid G.
Author: Mitchell, Tom M.
Author: Chestek, Richard A.
Author: Buchanan, Bruce G.
Date: March 1977
Abstract:
A model for learning systems is presented, and representative AI,
pattern recognition, and control systems are discussed in terms of its
framework. The model details the functional components felt to be
essential for any learning system, independent of the techniques used
for its construction, and the specific environment in which it
operates. These components are performance element, instance selector,
critic, learning element, blackboard, and world model. Consideration of
learning system design leads naturally to the concept of a layered
system, each layer operating at a different level of abstraction.
CS-TR-77-605
Report Number: CS-TR-77-606
Institution: Stanford University, Department of Computer Science
Title: A programming and problem-solving seminar
Author: Clancy, Michael J.
Author: Knuth, Donald E.
Date: April 1977
Abstract:
This report contains edited transcripts of the discussions held in
Stanford's course CS 204, Problem Seminar, during autumn quarter 1976.
Since the topics span a large range of ideas in computer science, and
since most of the important research paradigms and programming
paradigms came up during the discussions, the notes may be of use to
graduate students of computer science at other universities, as well as
to their professors and to professional people in the "real world".
CS-TR-77-606
Report Number: CS-TR-77-607
Institution: Stanford University, Department of Computer Science
Title: Specifications and proofs for abstract data types in concurrent programs
Author: Owicki, Susan S.
Date: April 1977
Abstract:
Shared abstract data types, such as queues and buffers, are useful
tools for building well-structured concurrent programs. This paper
presents a method for specifying shared types in a way that simplifies
concurrent program verification. The specifications describe the
operations of the shared type in terms of their effect on variables of
the process invoking the operation. This makes it possible to verify
the processes independently, reducing the complexity of the proof. The
key to defining such specifications is the concept of a private
variable: a variable which is part of a shared object but belongs to
just one process. Shared types can be implemented using an extended
form of monitors; proof rules are given for verifying that a monitor
correctly implements its specifications. Finally, it is shown how
concurrent programs can be verified using the specifications of their
shared types. The specification and proof techniques are illustrated
with a number of examples involving a shared bounded buffer.
CS-TR-77-607
Report Number: CS-TR-77-609
Institution: Stanford University, Department of Computer Science
Title: Complexity of combinatorial algorithms
Author: Tarjan, Robert Endre
Date: April 1977
Abstract:
This paper examines recent work on the complexity of combinatorial
algorithms, highlighting the aims of the work, the mathematical tools
used, and the important results. Included are sections discussing ways
to measure the complexity of an algorithm, methods for proving that
certain problems are very hard to solve, tools useful in the design of
good algorithms, and recent improvements in algorithms for solving ten
representative problems. The final section suggests some directions for
future research.
CS-TR-77-609
Report Number: CS-TR-77-611
Institution: Stanford University, Department of Computer Science
Title: The logic of computer programming
Author: Manna, Zohar
Author: Waldinger, Richard J.
Date: August 1977
Abstract:
Techniques derived from mathematical logic promise to provide an
alternative to the conventional methodology for constructing,
debugging, and optimizing computer programs. Ultimately, these
techniques are intended to lead to the automation of many of the facets
of the programming process. This paper provides a unified tutorial
exposition of the logical techniques, illustrating each with examples.
The strengths and limitations of each technique as a practical
programming aid are assessed and attempts to implement these methods in
experimental systems are discussed.
CS-TR-77-611
Report Number: CS-TR-77-614
Institution: Stanford University, Department of Computer Science
Title: The convergence of functions to fixedpoints of recursive definitions
Author: Manna, Zohar
Author: Shamir, Adi
Date: May 1977
Abstract:
The classical method for constructing the least fixedpoint of a
recursive definition is to generate a sequence of functions whose
initial element is the totally undefined function and which converges
to the desired least fixedpoint. This method, due to Kleene, cannot be
generalized to allow the construction of other fixedpoints. In this
paper we present an alternate definition of convergence and a new
fixedpoint access method of generating sequences of functions for a
given recursive definition. The initial function of the sequence can be
an arbitrary function, and the sequence will always converge to a
fixedpoint that is "close" to the initial function. This defines a
monotonic mapping from the set of partial functions onto the set of all
fixedpoints of the given recursive definition.
CS-TR-77-614
Report Number: CS-TR-77-615
Institution: Stanford University, Department of Computer Science
Title: Numerical methods for the first biharmonic equation and for the two-dimensional Stokes problem
Author: Glowinski, Roland
Author: Pironneau, Olivier
Date: May 1977
Abstract:
We describe in this report various methods, iterative and "almost
direct," for solving the first biharmonic problem on general
two-dimensional domains once the continuous problem has been
approximated by an appropriate mixed finite element method. Using the
approach described in this report we recover some well known methods
for solving the first biharmonic equation as a system of coupled
harmonic equations, but some of the methods discussed here are
completely new, including a conjugate gradient type algorithm. In the
last part of this report we discuss the extension of the above methods
to the numerical solution of the two dimensional Stokes problem in
p-connected domains (p $\geq$ 1) through the stream function-vorticity
formulation.
CS-TR-77-615
Report Number: CS-TR-77-616
Institution: Stanford University, Department of Computer Science
Title: Stability of the Fourier method
Author: Kreiss, Heinz-Otto
Author: Oliger, Joseph
Date: August 1977
Abstract:
In this paper we develop a stability theory for the Fourier (or
pseudo-spectral) method for linear hyperbolic and parabolic partial
differential equations with variable coefficients.
CS-TR-77-616
Report Number: CS-TR-77-618
Institution: Stanford University, Department of Computer Science
Title: A production system for automatic deduction
Author: Nilsson, Nils J.
Date: August 1977
Abstract:
A new predicate calculus deduction system based on production rules is
proposed. The system combines several developments in Artificial
Intelligence and Automatic Theorem Proving research including the use
of domain-specific inference rules and separate mechanisms for forward
and backward reasoning. It has a clean separation between the data
base, the production rules, and the control system. Goals and subgoals
are maintained in an AND/OR tree to represent assertions. The
production rules modify these structures until they "connect" in a
fashion that proves the goal theorem. Unlike some previous systems that
used production rules, ours is not limited to rules in Horn Clause
form. Unlike previous PLANNER-like systems, ours can handle the full
range of predicate calculus expressions including those with quantified
variables, disjunctions and negations.
CS-TR-77-618
Report Number: CS-TR-77-619
Institution: Stanford University, Department of Computer Science
Title: Time-space trade-offs in a pebble game
Author: Paul, Wolfgang J.
Author: Tarjan, Robert Endre
Date: July 1977
Abstract:
A certain pebble game on graphs has been studied in various contexts as
a model for the time and space requirements of computations. In this
note it is shown that there exists a family of directed acyclic graphs
$G_n$ and constants $c_1$, $c_2$, $c_3$ such that (1) $G_n$ has n nodes
and each node in $G_n$ has indegree at most 2. (2) Each graph $G_n$ can
be pebbled with $c_1\sqrt{n}$ pebbles in n moves. (3) Each graph $G_n$
can also be pebbled with $c_2\sqrt{n}$ pebbles, $c_2$ < $c_1$, but
every strategy which achieves this has at least $2^{c_3\sqrt{n}}$ moves.
CS-TR-77-619
Report Number: CS-TR-77-621
Institution: Stanford University, Department of Computer Science
Title: The art of artificial intelligence: I. Themes and case studies of knowledge engineering
Author: Feigenbaum, Edward A.
Date: August 1977
Abstract:
The knowledge engineer practices the art of bringing the principles and
tools of AI research to bear on difficult applications problems
requiring experts' knowledge for their solution. The technical issues
of acquiring this knowledge, representing it, and using it
appropriately to construct and explain lines-of-reasoning, are
important problems in the design of knowledge-based systems. Various
systems that have achieved expert level performance in scientific and
medical inference illuminate the art of knowledge engineering and its
parent science, Artificial Intelligence.
CS-TR-77-621
Report Number: CS-TR-77-624
Institution: Stanford University, Department of Computer Science
Title: Recent research in computer science.
Author: McCarthy, John
Author: Binford, Thomas O.
Author: Green, Cordell C.
Author: Luckham, David C.
Author: Manna, Zohar
Author: Winograd, Terry A.
Author: Earnest, Lester D.
Date: June 1977
Abstract:
This report summarizes recent accomplishments in six related areas: (1)
basic AI research and formal reasoning, (2) image understanding, (3)
mathematical theory of computation, (4) program verification, (5)
natural language understanding, and (6) knowledge based programming.
CS-TR-77-624
Report Number: CS-TR-77-625
Institution: Stanford University, Department of Computer Science
Title: A fast merging algorithm
Author: Brown, Mark R.
Author: Tarjan, Robert Endre
Date: August 1977
Abstract:
We give an algorithm which merges sorted lists represented as balanced
binary trees. If the lists have lengths m and n (m $\leq$ n), then the
merging procedure runs in O(m log(n/m)) steps, which is the same order
as the lower bound on all comparison-based algorithms for this problem.
CS-TR-77-625
Report Number: CS-TR-77-626
Institution: Stanford University, Department of Computer Science
Title: On the loop switching addressing problem
Author: Yao, Andrew Chi-Chih
Date: October 1977
Abstract:
The following graph addressing problem was studied by Graham and Pollak
in devising a routing scheme for Pierce's Loop Switching Network. Let G
be a graph with n vertices. It is desired to assign to each vertex
$v_i$ an address in $\{0,1,*\}^\ell$, such that the Hamming distance
between the addresses of any two vertices agrees with their distance in
G. Let N(G) be the minimum length $\ell$ for which an assignment is
possible. It was shown by Graham and Pollak that N(G) $\leq m_G(n-1)$,
where $m_G$ is the diameter of G. In the present paper, we shall prove
that N(G) $\leq 1.09(\lg m_G)n + 8n$ by an explicit construction. This
shows in particular that any graph has an addressing scheme of length
O(n log n).
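As a small illustration of the addressing convention (our example, not the report's construction), the distance between two addresses over {0,1,*} ignores positions holding the don't-care symbol:

    def loop_distance(addr_a, addr_b):
        """Distance between two loop-switching addresses over {0, 1, *}.

        Positions where either address holds the don't-care symbol '*'
        contribute nothing; the remaining positions are compared as in the
        ordinary Hamming distance.  A toy illustration only.
        """
        return sum(a != b
                   for a, b in zip(addr_a, addr_b)
                   if a != '*' and b != '*')

    print(loop_distance("01*1", "0*00"))  # positions 1, 2 skipped; position 3 differs -> 1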
CS-TR-77-626
Report Number: CS-TR-77-627
Institution: Stanford University, Department of Computer Science
Title: A separator theorem for planar graphs
Author: Lipton, Richard J.
Author: Tarjan, Robert Endre
Date: October 1977
Abstract:
Let G be any n-vertex planar graph. We prove that the vertices of G can
be partitioned into three sets A,B,C such that no edge joins a vertex
in A with a vertex in B, neither A nor B contains more than 2n/3
vertices, and C contains no more than $2\sqrt{2}\sqrt{n}$ vertices. We
exhibit an algorithm which finds such a partition A,B,C in O(n) time.
CS-TR-77-627
Report Number: CS-TR-77-628
Institution: Stanford University, Department of Computer Science
Title: Applications of a planar separator theorem
Author: Lipton, Richard J.
Author: Tarjan, Robert Endre
Date: October 1977
Abstract:
Any n-vertex planar graph has the property that it can be divided into
components of roughly equal size by removing only O($\sqrt{n}$)
vertices. This separator theorem, in combination with a
divide-and-conquer strategy, leads to many new complexity results for
planar graph problems. This paper describes some of these results.
CS-TR-77-628
Report Number: CS-TR-77-629
Institution: Stanford University, Department of Computer Science
Title: The complexity of pattern matching for a random string
Author: Yao, Andrew Chi-Chih
Date: October 1977
Abstract:
We study the average-case complexity of finding all occurrences of a
given pattern $\alpha$ in an input text string. Over an alphabet of q
symbols, let c($\alpha$,n) be the minimum average number of characters
that need to be examined in a random text string of length n. We prove
that, for large m, almost all patterns $\alpha$ of length m satisfy
c($\alpha$,n) = $\Theta(\lceil \log_q((n-m)/\ln m + 2)\rceil)$ if
$m \leq n \leq 2m$, and c($\alpha$,n) = $\Theta((\lceil \log_q m \rceil / m)\, n)$
if n > 2m. This in particular confirms a conjecture
raised in a recent paper by Knuth, Morris, and Pratt [1977].
CS-TR-77-629
Report Number: CS-TR-77-631
Institution: Stanford University, Department of Computer Science
Title: Inference rules for program annotation
Author: Dershowitz, Nachum
Author: Manna, Zohar
Date: October 1977
Abstract:
Methods are presented whereby an Algol-like program, given together
with its specifications, can be documented automatically. The program
is incrementally annotated with invariant relationships that hold
between program variables at intermediate points in the program and
explain the actual workings of the program regardless of whether the
program is correct. Thus this documentation can be used for proving the
correctness of the program or may serve as an aid in the debugging of
an incorrect program. The annotation techniques are formulated as
Hoare-like inference rules which derive invariants from the assignment
statements, from the control structure of the program, or,
heuristically, from suggested invariants. The application of these
rules is demonstrated by two examples which have run on an experimental
implementation.
CS-TR-77-631
Report Number: CS-TR-77-634
Institution: Stanford University, Department of Computer Science
Title: A new proof of global convergence for the tridiagonal QL algorithm
Author: Hoffmann, Walter
Author: Parlett, Beresford N.
Date: October 1977
Abstract:
By exploiting the relation of the QL algorithm to inverse iteration we
obtain a proof of global convergence which is more conceptual and less
computational than previous analyses. The proof uses a new, but simple,
error estimate for the first step of inverse iteration.
CS-TR-77-634
Report Number: CS-TR-77-635
Institution: Stanford University, Department of Computer Science
Title: A block Lanczos method to compute the singular values and corresponding singular vectors of a matrix
Author: Golub, Gene H.
Author: Luk, Franklin T.
Author: Overton, Michael L.
Date: October 1977
Abstract:
We present a block Lanczos method to compute the largest singular
values and corresponding left and right singular vectors of a large
sparse matrix. Our algorithm does not transform the matrix A but
accesses it only through a user-supplied routine which computes AX or
$A^t$X for a given matrix X. This paper also includes a thorough
discussion of the various ways to compute the singular value
decomposition of a banded upper triangular matrix; this problem arises
as a subproblem to be solved during the block Lanczos procedure.
CS-TR-77-635
Report Number: CS-TR-77-636
Institution: Stanford University, Department of Computer Science
Title: $C^m$ convergence of trigonometric interpolants
Author: Bube, Kenneth P.
Date: October 1977
Abstract:
For m $\geq$ 0, we obtain sharp estimates of the uniform accuracy of
the m-th derivative of the n-point trigonometric interpolant of a
function for two classes of periodic functions on R. As a corollary,
the n-point interpolant of a function in $C^k$ uniformly approximates
the function to order o($n^{1/2-k}$), improving the recent estimate of
O($n^{1-k}$). These results remain valid if we replace the
trigonometric interpolant by its K-th partial sum, replacing n by K in
the estimates.
CS-TR-77-636
Report Number: CS-TR-77-637
Institution: Stanford University, Department of Computer Science
Title: On the gap structure of sequences of points on a circle
Author: Ramshaw, Lyle H.
Date: November 1977
Abstract:
Considerable mathematical effort has gone into studying sequences of
points in the interval (0,1) which are evenly distributed, in the sense
that certain intervals contain roughly the correct percentages of the
first n points. This paper explores the related notion in which a
sequence is evenly distributed if its first n points split a given
circle into intervals which are roughly equal in length, regardless of
their relative positions. The sequence $x_k$ = ($\log_2$(2k-1) mod 1)
was introduced in this context by de Bruijn and Erdős. We will see that
the gap structure of this sequence is uniquely optimal in a certain
sense, and optimal under a wide class of measures.
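A quick way to see the gap structure in question (purely illustrative) is to generate the first few points of the sequence and measure the arcs between consecutive points on the circle:

    import math

    def gaps(n):
        """Gap lengths after placing the first n points
        x_k = log2(2k-1) mod 1 on a circle of circumference 1
        (the de Bruijn-Erdős sequence from the abstract).  Illustrative only."""
        points = sorted(math.log2(2 * k - 1) % 1.0 for k in range(1, n + 1))
        return [(points[(i + 1) % n] - points[i]) % 1.0 for i in range(n)]

    print(gaps(4))   # four gaps of roughly comparable length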
CS-TR-77-637
Report Number: CS-TR-77-638
Institution: Stanford University, Department of Computer Science
Title: A generalized conjugate gradient algorithm for solving a class of quadratic programming problems
Author: O'Leary, Dianne Prost
Date: December 1977
Abstract:
In this paper we apply matrix splitting techniques and a conjugate
gradient algorithm to the problem of minimizing a convex quadratic form
subject to upper and lower bounds on the variables. This method
exploits sparsity structure in the matrix of the quadratic form.
Choices of the splitting operator are discussed and convergence results
are established. We present the results of numerical experiments
showing the effectiveness of the algorithm on free boundary problems
for elliptic partial differential equations, and we give comparisons
with other algorithms.
CS-TR-77-638
Report Number: CS-TR-77-639
Institution: Stanford University, Department of Computer Science
Title: On program synthesis knowledge
Author: Green, Cordell C.
Author: Barstow, David R.
Date: November 1977
Abstract:
This paper presents a body of program synthesis knowledge dealing with
array operations, space reutilization, the divide and conquer paradigm,
conversion from recursive paradigms to iterative paradigms, and ordered
set enumerations. Such knowledge can be used for the synthesis of
efficient and in-place sorts including quicksort, mergesort, sinking
sort, and bubble sort, as well as other ordered set operations such as
set union, element removal, and element addition. The knowledge is
explicated to a level of detail such that it is possible to codify this
knowledge as a set of program synthesis rules for use by a
computer-based synthesis system. The use and content of this set of
programming rules is illustrated herein by the methodical synthesis of
bubble sort, sinking sort, quicksort, and mergesort.
CS-TR-77-639
Report Number: CS-TR-77-640
Institution: Stanford University, Department of Computer Science
Title: Structured programming with recursion
Author: Manna, Zohar
Author: Waldinger, Richard J.
Date: January 1978
Abstract: No abstract available.
CS-TR-77-640
Report Number: CS-TR-77-642
Institution: Stanford University, Department of Computer Science
Title: On constructing minimum spanning trees in k-dimensional spaces and related problems
Author: Yao, Andrew Chi-Chih
Date: December 1977
Abstract:
The problem of finding a minimum spanning tree connecting n points in a
k-dimensional space is discussed under three common distance metrics --
Euclidean, rectilinear, and $L_\infty$. By employing a subroutine that
solves the post office problem, we show that, for fixed k $\geq$ 3,
such a minimum spanning tree can be found in time O($n^{2-a(k)}
(\log n)^{1-a(k)}$), where a(k) = $2^{-(k+1)}$. The bound can be improved to
O($(n \log n)^{1.8}$) for points in the 3-dimensional Euclidean space.
We also obtain o($n^2$) algorithms for finding a farthest pair in a set
of n points and for other related problems.
CS-TR-77-642
Report Number: CS-TR-77-645
Institution: Stanford University, Department of Computer Science
Title: Generalized nested dissection
Author: Lipton, Richard J.
Author: Rose, Donald J.
Author: Tarjan, Robert Endre
Date: December 1977
Abstract:
J. A. George has discovered a method, called nested dissection, for
solving a system of linear equations defined on an n = k $\times$ k
square grid in O(n log n) space and O($n^{3/2}$) time. We generalize
this method without degrading the time and space bounds so that it
applies to any system of equations defined on a planar or almost-planar
graph. Such systems arise in the solution of two-dimensional finite
element problems. Our method uses the fact that planar graphs have good
separators. More generally, we show that sparse Gaussian elimination is
efficient for any class of graphs which have good separators, and
conversely that graphs without good separators (including almost all
sparse graphs) are not amenable to sparse Gaussian elimination.
CS-TR-77-645
Report Number: CS-TR-77-646
Institution: Stanford University, Department of Computer Science
Title: Fast decision algorithms based on congruence closure
Author: Nelson, Charles Gregory
Author: Oppen, Derek C.
Date: February 1978
Abstract:
We define the notion of the 'congruence closure' of a relation on a
graph and give a simple algorithm for computing it. We then give
decision procedures for the quantifier-free theory of equality and the
quantifier-free theory of LISP list structure, both based on this
algorithm. The procedures are fast enough to be practical in mechanical
theorem proving: each procedure determines the satisfiability of a
conjunction of length n of literals in time O($n^2$). We also show that
if the axiomatization of the theory of list structure is changed
slightly, the problem of determining the satisfiability of a
conjunction of literals becomes NP-complete. We have implemented the
decision procedures in our simplifier for the Stanford Pascal Verifier.
CS-TR-77-646
Report Number: CS-TR-77-647
Institution: Stanford University, Department of Computer Science
Title: A lower bound to palindrome recognition by probabilistic Turing machines
Author: Yao, Andrew Chi-Chih
Date: December 1977
Abstract:
We call attention to the problem of proving lower bounds on
probabilistic Turing machine computations. It is shown that any
probabilistic Turing machine recognizing the language L = {w $\phi$ w |
w $\in$ $\{0,1\}^*$} with error $\lambda$ < 1/2 must take
$\Omega$(n log n) time.
CS-TR-77-647
Report Number: CS-TR-78-649
Institution: Stanford University, Department of Computer Science
Title: DENDRAL and Meta-DENDRAL: their applications dimension
Author: Buchanan, Bruce G.
Author: Feigenbaum, Edward A.
Date: February 1978
Abstract:
The DENDRAL and Meta-DENDRAL programs assist chemists with data
interpretation problems. The design of each program is described in the
context of the chemical inference problems the program solves. Some
chemical results produced by the programs are mentioned.
CS-TR-78-649
Report Number: CS-TR-78-651
Institution: Stanford University, Department of Computer Science
Title: Proving termination and multiset orderings
Author: Dershowitz, Nachum
Author: Manna, Zohar
Date: March 1978
Abstract:
A common tool for proving the termination of programs is the
well-founded set, a set ordered in such a way as to admit no infinite
descending sequences. The basic approach is to find a termination
function that maps the elements of the program into some well-founded
set, such that the value of the termination function is continually
reduced throughout the computation. All too often, the termination
functions required are difficult to find and are of a complexity out of
proportion to the program under consideration. However, by providing
more sophisticated well-founded sets, the corresponding termination
functions can be simplified. Given a well-founded set S, we consider
multisets over S, "sets" that admit multiple occurrences of elements
taken from S. We define an ordering on all finite multisets over S that
is induced by the given ordering on S. This multiset ordering is shown
to be well-founded. The value of the multiset ordering is that it
permits the use of relatively simple and intuitive termination
functions in otherwise difficult termination proofs. In particular, we
apply the multiset ordering to provide simple proofs of the termination
of production systems, programs defined in terms of sets of rewriting
rules.
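A minimal Python sketch of the induced ordering for the special case of a totally ordered base set (such as the integers), using the characterization that M dominates N exactly when they differ and every element of N - M is exceeded by some element of M - N:

    from collections import Counter

    def multiset_greater(M, N):
        # M and N are finite multisets given as iterables over a totally ordered set.
        m, n = Counter(M), Counter(N)
        only_m = list((m - n).elements())   # elements of M not cancelled by N
        only_n = list((n - m).elements())   # elements of N not cancelled by M
        if not only_m and not only_n:       # equal multisets are not strictly comparable
            return False
        return all(any(x > y for x in only_m) for y in only_n)

    # {5} dominates {4,4,4,1}: every leftover element of the right side is below 5.
    print(multiset_greater([5], [4, 4, 4, 1]))   # True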
CS-TR-78-651
Report Number: CS-TR-78-652
Institution: Stanford University, Department of Computer Science
Title: Simplification by cooperating decision procedures
Author: Nelson, Charles Gregory
Author: Oppen, Derek C.
Date: April 1978
Abstract:
We describe a simplifier for use in program manipulation and
verification. The simplifier finds a normal form for any expression
over the language consisting of individual variables, the usual boolean
connectives, equality, the conditional function cond (denoting
if-then-else), the numerals, the arithmetic functions and predicates +,
- and $\leq$, the LISP constants, functions and predicates nil, car,
cdr, cons and atom, the functions store and select for storing into and
selecting from arrays, and uninterpreted function symbols. Individual
variables range over the union of the reals, the set of arrays, LISP
list structure and the booleans true and false. The simplifier is
complete; that is, it simplifies every valid formula to true. Thus it
is also a decision procedure for the quantifier-free theory of reals,
arrays and list structure under the above functions and predicates. The
organization of the simplifier is based on a method for combining
decision procedures for several theories into a single decision
procedure for a theory combining the original theories. More precisely,
given a set S of functions and predicates over a fixed domain, a
satisfiability program for S is a program which determines the
satisfiability of conjunctions of literals (signed atomic formulas)
whose predicate and function symbols are in S. We give a general
procedure for combining satisfiability programs for sets S and T into a
single satisfiability program for S $\cup$ T, given certain conditions
on S and T. The simplifier described in this paper is currently used in
the Stanford Pascal Verifier.
CS-TR-78-652
Report Number: CS-TR-78-653
Institution: Stanford University, Department of Computer Science
Title: Multi-terminal 0-1 flow
Author: Shiloach, Yossi
Date: April 1978
Abstract:
Given an undirected 0-1 flow network with n vertices and m edges, we
present an O($n^2$(m+n)) algorithm which generates all ($n\choose 2$)
maximal flows between all the pairs of vertices. Since O($n^2$(m+n)) is
also the size of the output, this algorithm is optimal up to a constant
factor.
CS-TR-78-653
Report Number: CS-TR-78-654
Institution: Stanford University, Department of Computer Science
Title: The two paths problem is polynomial
Author: Shiloach, Yossi
Date: April 1978
Abstract:
Given an undirected graph G = (V,E) and vertices
$s_1$,$t_1$;$s_2$,$t_2$, the problem is to determine whether or not G
admits two vertex disjoint paths $P_1$ and $P_2$, connecting $s_1$ with
$t_1$ and $s_2$ with $t_2$ respectively. This problem is solved by an
O($n\cdot m$) algorithm (n = |V|, m = |E|). An important by-product of
the paper is a theorem that states that if G is 4-connected and
non-planar, then such paths $P_1$ and $P_2$ exist for any choice of
$s_1$, $s_2$, $t_1$, and $t_2$, (as was conjectured by Watkins [1968]).
CS-TR-78-654
Report Number: CS-TR-78-655
Institution: Stanford University, Department of Computer Science
Title: On accuracy and unconditional stability of linear multistep methods for second order differential equations
Author: Dahlquist, Germund
Date: April 1978
Abstract:
Linear multistep methods for the solution of the equation y" = f(t,y)
are studied by means of the test equation y" = -$\omega^2$y, with
$\omega$ real. It is shown that the order of accuracy cannot exceed 2
for an unconditionally stable method.
CS-TR-78-655
Report Number: CS-TR-78-657
Institution: Stanford University, Department of Computer Science
Title: On the model theory of knowledge
Author: McCarthy, John
Author: Sato, Masahiko
Author: Hayashi, Takeshi
Author: Igarashi, Shigeru
Date: April 1978
Abstract:
Another language for expressing "knowing that" is given together with
axioms and rules of inference and a Kripke type semantics. The
formalism is extended to time-dependent knowledge. Completeness and
decidability theorems are given. The problem of the wise men with spots
on their foreheads and the problem of the unfaithful wives are
expressed in the formalism and solved.
CS-TR-78-657
Report Number: CS-TR-78-661
Institution: Stanford University, Department of Computer Science
Title: Variations of a pebble game on graphs
Author: Gilbert, John R.
Author: Tarjan, Robert Endre
Date: September 1978
Abstract:
We examine two variations of a one-person pebble game played on
directed graphs, which has been studied as a model of register
allocation. The black-white pebble game of Cook and Sethi is shown to
require as many pebbles in the worst case as the normal pebble game, to
within a constant factor. For another version of the pebble game, the
problem of deciding whether a given number of pebbles is sufficient for
a given graph is shown to be complete in polynomial space.
CS-TR-78-661
Report Number: CS-TR-78-662
Institution: Stanford University, Department of Computer Science
Title: New algorithms in bin packing
Author: Yao, Andrew Chi-Chih
Date: September 1978
Abstract:
In the bin-packing problem a list L of n numbers is to be packed into
unit-capacity bins. For any algorithm S, let r(S) be the maximum ratio
S(L)/$L^*$ for large $L^*$, where S(L) denotes the number of bins used
by S and $L^*$ denotes the minimum number needed. In this paper we give
an on-line O(n log n)-time algorithm RFF with r(RFF) = 5/3, and an
off-line polynomial-time algorithm RFFD with r(RFFD) =
(11/9)-$\epsilon$ for some fixed $\epsilon$ > 0. These are strictly
better respectively than two prominent algorithms -- the First-Fit (FF)
which is on-line with r(FF) = 17/10, and the First-Fit-Decreasing (FFD)
with r(FFD) = 11/9. Furthermore, it is shown that any on-line algorithm
S must have r(S) $\geq$ 3/2. We also discuss the question "how well can
an O(n)-time algorithm perform?", showing that, in the generalized
d-dimensional bin-packing, any O(n)-time algorithm S must have r(S)
$\geq$ d.
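For reference, a minimal Python sketch of the two baseline heuristics mentioned above, First-Fit and First-Fit-Decreasing (the refined RFF and RFFD algorithms of the report are not reproduced):

    def first_fit(items, capacity=1.0):
        # Place each item in the first open bin with room; open a new bin otherwise.
        bins = []
        for x in items:
            for b in bins:
                if sum(b) + x <= capacity:
                    b.append(x)
                    break
            else:
                bins.append([x])
        return bins

    def first_fit_decreasing(items, capacity=1.0):
        # FFD: sort the items in decreasing order, then apply First-Fit.
        return first_fit(sorted(items, reverse=True), capacity)

    items = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]
    print(len(first_fit(items)), len(first_fit_decreasing(items)))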
CS-TR-78-662
Report Number: CS-TR-78-663
Institution: Stanford University, Department of Computer Science
Title: Software restyling in graphics and programming languages
Author: Grosse, Eric H.
Date: September 1978
Abstract:
The value of large software products can be cheaply increased by adding
restyled interfaces that attract new users. As examples of this
approach, a set of graphics primitives and a language precompiler for
scientific computation are described. These two systems include a
general user-defined coordinate system instead of numerous system
settings, indention to specify block structure, a modified indexing
convention for array parameters, a syntax for n-and-a-half-times-'round
loops, and engineering format for real constants; most of all, they
strive to be as small as possible.
CS-TR-78-663
Report Number: CS-TR-78-665
Institution: Stanford University, Department of Computer Science
Title: SCALD: Structured Computer-Aided Logic Design
Author: McWilliams, Thomas M.
Author: Widdoes, Lawrence C., Jr.
Date: March 1978
Abstract:
SCALD, a graphics-based hierarchical digital logic design system, is
described and an example of its use is given. SCALD provides a total
computer-aided design environment which inputs a high-level description
of a digital system, and produces output for computer-aided manufacture
of the system. SCALD has been used in the design of an operational,
15-MIPS, 5500-chip ECL-10k processor.
CS-TR-78-665
Report Number: CS-TR-78-666
Institution: Stanford University, Department of Computer Science
Title: The SCALD physical design subsystem
Author: McWilliams, Thomas M.
Author: Widdoes, Lawrence C., Jr.
Date: March 1978
Abstract:
The SCALD physical design subsystem is described. SCALD supports the
automatic construction of ECL-10k logic on wire wrap cards from the
output of a hierarhical design system. Results of its use in the design
of an operational 15-MIPS 5500-chip processor are presented and
discussed.
CS-TR-78-666
Report Number: CS-TR-78-668
Institution: Stanford University, Department of Computer Science
Title: BAOBAB, a parser for a rule-based system using a semantic grammar
Author: Bonnet, Alain
Date: September 1978
Abstract:
Until a knowledge-based system is able to learn by itself, it
must acquire new knowledge and new heuristics from human experts. This
is traditionally done with the aid of a computer programmer acting as
intermediary. The direct transfer of knowledge from an expert to the
system requires a natural-language processor capable of handling a
substantial subset of English. The development of such a
natural-language processor is a long-term goal of automating knowledge
acquisition; facilitating the interface between the expert and the
system is a first step toward this goal. This paper describes BAOBAB, a
program designed and implemented for MYCIN (Shortliffe 1974), a medical
consultation system for infectious disease diagnosis and therapy
selection. BAOBAB is concerned with the problem of parsing -
recognizing natural language sentences and encoding them into MYCIN's
internal representation. For this purpose, it uses a semantic grammar
in which the non-terminal symbols denote semantic categories (e.g.,
infections and symptoms), or conceptual categories which are common
tools of knowledge representation in artificial intelligence (e.g.,
attributes, objects, values and predicate functions). This differs from
a syntactic grammar in which non-terminal symbols are syntactic
elements such as nouns or verbs.
CS-TR-78-668
Report Number: CS-TR-78-670
Institution: Stanford University, Department of Computer Science
Title: Information bounds are weak in the shortest distance problem
Author: Graham, Ronald L.
Author: Yao, Andrew C.
Author: Yao, F. Frances
Date: September 1978
Abstract:
In the all-pair shortest distance problem, one computes the matrix D =
($d_{ij}$) where $d_{ij}$ is the minimum weighted length of any path
from vertex i to vertex j in a directed complete graph with a weight on
each edge. In all the known algorithms, a shortest path $p_{ij}$
achieving $d_{ij}$ is also implicitly computed. In fact, $\log_3$ f(n)
is an information-theoretic lower bound where f(n) is the total number
of distinct patterns ($p_{ij}$) for n-vertex graphs. As f(n)
potentially can be as large as $2^{n^3}$, one might hope that a
non-trivial lower bound can be derived this way in the decision tree
model. We study the characterization and enumeration of realizable
patterns, and show that f(n) $\leq C^{n^2}$. Thus no lower bound
greater than C$n^2$ can be derived from this approach. We prove as a
corollary that the Triangular polyhedron $T^{(n)}$, defined in
$E^{(n\choose 2)}$ by $d_{ij} \geq 0$ and the triangle inequalities
$d_{ij} + d_{jk} \geq d_{ik}$, has at most $C^{n^2}$ faces of all
dimensions, thus resolving an open question in a similar information
bound approach to the shortest distance problem.
CS-TR-78-670
Report Number: CS-TR-78-673
Institution: Stanford University, Department of Computer Science
Title: A numerical library and its support
Author: Chan, Tony F.
Author: Coughran, William M., Jr.
Author: Grosse, Eric H.
Author: Heath, Michael T.
Date: November 1978
Abstract:
Reflecting on four years of numerical consulting at the Stanford Linear
Accelerator Center, we point out solved and outstanding problems in
selecting and installing mathematical software, helping users,
maintaining the library and monitoring its use, and managing the
consulting operation.
CS-TR-78-673
Report Number: CS-TR-78-674
Institution: Stanford University, Department of Computer Science
Title: Finite element approximation and iterative solution of a class of mildly non-linear elliptic equations
Author: Chan, Tony F.
Author: Glowinski, Roland
Date: November 1978
Abstract:
We describe in this report the numerical analysis of a particular class
of nonlinear Dirichlet problems. We consider an equivalent variational
inequality formulation on which the problems of existence, uniqueness
and approximation are easier to discuss. We prove in particular the
convergence of an approximation by piecewise linear finite elements.
Finally, we describe and compare several iterative methods for solving
the approximate problems and particularly some new algorithms of
augmented lagrangian type, which contain as special case some
well-known alternating direction methods. Numerical results are
presented.
CS-TR-78-674
Report Number: CS-TR-78-677
Institution: Stanford University, Department of Computer Science
Title: Comprehensive Examinations 1972-1978
Author: The faculty and students of the Stanford University Computer Science Department
Edited by Frank M. Liang
Date: November 1978
Abstract: Since Spring 1972, the Stanford Computer Science Department has periodically
given a "comprehensive examination" as one of the qualifying exams for graduate students. Such
exams generally have consisted of a six-hour written test followed by a several-day programming problem.
Their intent is to make it possible to assess whether a student is sufficiently prepared in all the
important aspects of computer science. This report presents the examination questions
from thirteen comprehensive examinations, along with their solutions.
CS-TR-78-677
Report Number: CS-TR-78-678
Institution: Stanford University, Department of Computer Science
Title: Reasoning about recursively defined data structures
Author: Oppen, Derek C.
Date: July 1978
Abstract:
A decision algorithm is given for the quantifier-free theory of
recursively defined data structures which, for a conjunction of length
n, decides its satisfiability in time linear in n. The first-order
theory of recursively defined data structures, in particular the
first-order theory of LISP list structure (the theory of CONS, CAR and
CDR), is shown to be decidable but not elementary recursive.
CS-TR-78-678
Report Number: CS-TR-78-679
Institution: Stanford University, Department of Computer Science
Title: Steplength algorithms for minimizing a class of nondifferentiable functions
Author: Murray, Walter
Author: Overton, Michael L.
Date: November 1978
Abstract:
Four steplength algorithms are presented for minimizing a class of
nondifferentiable functions which includes functions arising from
$\ell_1$ and $\ell_\infty$ approximation problems and penalty functions
arising from constrained optimization problems. Two algorithms are
given for the case when derivatives are available wherever they exist
and two for the case when they are not available. We take the view that
although a simple steplength algorithm may be all that is required to
meet convergence criteria for the overall algorithm, from the point of
view of efficiency it is important that the step achieve as large a
reduction in the function value as possible, given a certain limit on
the effort to be expended. The algorithms include the facility for
varying this limit, producing anything from an algorithm requiring a
single function evaluation to one doing an exact linear search. They
are based on univariate minimization algorithms which we present first.
These are normally at least quadratically convergent when derivatives
are used and superlinearly convergent otherwise, regardless of whether
or not the function is differentiable at the minimum.
CS-TR-78-679
Report Number: CS-TR-78-680
Institution: Stanford University, Department of Computer Science
Title: Bibliography of Stanford Computer Science reports, 1963-1978
Author: Stanley, Connie J.
Date: November 1978
Abstract:
This report lists, in chronological order, all reports published by the
Stanford Computer Science Department since 1963. Each report is
identified by Computer Science number, author's name, title, National
Technical Information Service (NTIS) retrieval number, date, and number
of pages. Complete listings of Theses, Artificial Intelligence Memos,
and Heuristic Programming Reports are given in the Appendix. Also, for
the first time, each report has been marked as to its availability for
ordering and the cost if applicable.
CS-TR-78-680
Report Number: CS-TR-78-683
Institution: Stanford University, Department of Computer Science
Title: Storing a sparse table
Author: Tarjan, Robert Endre
Date: December 1978
Abstract:
The problem of storing and searching large sparse tables arises in
compiling and in other areas of computer science. The standard
technique for storing such tables is hashing, but hashing has poor
worst-case performance. We consider good worst-case methods for storing
a table of n entries, each an integer between 0 and N-1. For dynamic
tables, in which look-ups and table additions are intermixed, the use
of a trie requires O(kn) storage and allows O($\log_k$(N/n)) worst-case
access time, where k is an arbitrary parameter. For static tables, in
which the entire table is constructed before any look-ups are made, we
propose a method which requires O(n $\log^{(\ell)} n$) storage and
allows O($\ell \log_n N$) access time, where $\ell$ is an arbitrary
parameter. Choosing $\ell$ = $\log^* n$ gives a method with O(n)
storage and O(($\log^* n$)($\log_n N$)) access time.
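A minimal Python sketch of the dynamic-table idea: store each key 0 <= key < N at a fixed depth of about $\log_k N$ in a k-ary trie over the key's base-k digits (illustrative only; the static scheme with $\log^*$ levels is more involved):

    class Trie:
        # k-ary trie over the base-k digits of integer keys in [0, N).
        def __init__(self, k, N):
            self.k = k
            self.depth = 1
            while k ** self.depth < N:
                self.depth += 1
            self.root = {}

        def _digits(self, key):
            # Exactly self.depth base-k digits, most significant first.
            return [(key // self.k ** (self.depth - 1 - i)) % self.k
                    for i in range(self.depth)]

        def insert(self, key):
            node = self.root
            for d in self._digits(key):
                node = node.setdefault(d, {})
            node['end'] = True

        def lookup(self, key):
            node = self.root
            for d in self._digits(key):
                if d not in node:
                    return False
                node = node[d]
            return 'end' in node

    t = Trie(k=4, N=1000)
    t.insert(123); t.insert(999)
    print(t.lookup(123), t.lookup(124))   # True False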
CS-TR-78-683
Report Number: CS-TR-78-684
Institution: Stanford University, Department of Computer Science
Title: The matrix inverse eigenvalue problem for periodic Jacobi matrices
Author: Boley, Daniel L.
Author: Golub, Gene H.
Date: December 1978
Abstract:
A stable numerical algorithm is presented for generating a periodic
Jacobi matrix from two sets of eigenvalues and the product of the
off-diagonal elements of the matrix. The algorithm requires a simple
generalization of the Lanczos algorithm. It is shown that the matrix is
not unique, but the algorithm will generate all possible solutions.
CS-TR-78-684
Report Number: CS-TR-78-687
Institution: Stanford University, Department of Computer Science
Title: Prolegomena to a theory of formal reasoning
Author: Weyhrauch, Richard W.
Date: December 1978
Abstract:
This paper is an introduction to the mechanization of a theory of
reasoning. Currently formal systems are out of favor with the AI
community. The aim of this paper is to explain how formal systems can
be used in AI by explaining how traditional ideas of logic can be
mechanized in a practical way. The paper presents several new ideas.
Each of these is illustrated by giving simple examples of how this idea
is mechanized in the reasoning system FOL. That is, this is not just
theory but there is an existing running implementation of these ideas.
In this paper: 1) we show how to mechanize the notion of model using
the idea of a simulation structure and explain why this is particularly
important to AI, 2) we show how to mechanize the notion of
satisfaction, 3) we present a very general evaluator for first order
expressions, which subsumes PROLOG and which we propose as a natural way of
thinking about logic programming, 4) we show how to formalize
metatheory, 5) we describe reflection principles, which connect
theories to their metatheories in a way new to AI, 6) we show how these
ideas can be used to dynamically extend the strength of FOL by
"implementing" subsidiary deduction rules, and how this in turn can be
extended to provide a method of describing and proving theorems about
heuristics for using these rules, 7) we discuss one notion of what it
could mean for a computer to learn and give an example, 8) we describe
a new kind of formal system that has the property that it can reason
about its own properties, 9) we give examples of all of the above.
CS-TR-78-687
Report Number: CS-TR-78-689
Institution: Stanford University, Department of Computer Science
Title: An $n^{log n}$ algorithm for the two-variable-per-constraint linear programming satisfiability problem
Author: Nelson, Charles Gregory
Date: November 1978
Abstract:
A simple algorithm is described which determines the satisfiability
over the reals of a conjunction of linear inequalities, none of which
contains more than two variables. In the worst case the algorithm
requires time O($mn^{\lceil \log_2 n \rceil + 3}$), where n is the
number of variables and m the number of inequalities. Several
considerations suggest that the algorithm may be useful in practice: it
is simple to implement, it is fast for some important special cases,
and if the inequalities are satisfiable it provides valuable
information about their solution set. The algorithm is particularly
suited to applications in mechanical program verification.
CS-TR-78-689
Report Number: CS-TR-78-690
Institution: Stanford University, Department of Computer Science
Title: A deductive approach to program synthesis
Author: Manna, Zohar
Author: Waldinger, Richard J.
Date: November 1978
Abstract:
Program synthesis is the systematic derivation of a program from a
given specification. A deductive approach to program synthesis is
presented for the construction of recursive programs. This approach
regards program synthesis as a theorem-proving task and relies on a
theorem-proving method that combines the features of transformation
rules, unification, and mathematical induction within a single
framework.
CS-TR-78-690
Report Number: CS-TR-78-693
Institution: Stanford University, Department of Computer Science
Title: A class of solutions to the gossip problem
Author: West, Douglas B.
Date: November 1978
Abstract:
We characterize and count optimal solutions to the gossip problem in
which no one hears his own information. That is, we consider graphs
with n vertices where the edges have a linear ordering such that an
increasing path exists from each vertex to every other, but there is no
increasing path from any vertex to itself. Such graphs exist only when
n is even, in which case the fewest number of edges is 2n-4, as in the
original gossip problem. We characterize optimal solutions of this sort
(NOHO-graphs) using a correspondence with a set of permutations and
binary sequences. This correspondence enables us to count these
solutions and several subclasses of solutions. The numbers of solutions
in each class are simple powers of 2 and 3, with exponents determined
by n. We also show constructively that NOHO-graphs are planar and
Hamiltonian, and we mention applications to related problems.
CS-TR-78-693
Report Number: CS-TR-78-694
Institution: Stanford University, Department of Computer Science
Title: Computer science at Stanford, 1977-1978
Author: King, Jonathan J.
Date: November 1978
Abstract:
This is a review of research and teaching in the Stanford Computer
Science Department during the 1977-1978 academic year.
CS-TR-78-694
Report Number: CS-TR-78-697
Institution: Stanford University, Department of Computer Science
Title: On the linear least squares problem with a quadratic constraint
Author: Gander, Walter
Date: November 1978
Abstract:
In this paper we present the theory and practical computational aspects
of the linear least squares problem with a quadratic constraint. New
theorems characterizing properties of the solutions are given and
extended for the problem of minimizing a general quadratic function
subject to a quadratic constraint. For two important regularization
methods we formulate dual equations which proved to be very useful for
applications to data smoothing. The resulting algorithm is a
numerically stable version of an algorithm proposed by Rutishauser. We
also show how to choose a third-order iteration method to solve the
secular equations. However, we are still far away from a foolproof,
machine-independent algorithm.
CS-TR-78-697
Report Number: CS-TR-78-699
Institution: Stanford University, Department of Computer Science
Title: SACON: a knowledge-based consultant for structural analysis
Author: Bennett, James
Author: Creary, Lewis
Author: Engelmore, Robert S.
Author: Melosh, Robert
Date: September 1978
Abstract:
In this report we describe an application of artificial intelligence
(AI) methods to structural analysis. We describe the development and
(partial) implementation of an "automated consultant" to advise
non-expert engineers in the use of a general-purpose structural
analysis program. The analysis program numerically simulates the
behavior of a physical structure subjected to various mechanical
loading conditions. The automated consultant, called SACON (Structural
Analysis CONsultant), is based on a version of the MYCIN program
[Shortliffe, 1974], originally developed to advise physicians in the
diagnosis and treatment of infectious diseases. The domain-specific
knowledge in MYCIN is represented as situation-action rules, and is
kept independent of the "inference engine" that uses the rules. By
substituting structural engineering knowledge for the medical
knowledge, the program was converted easily from the domain of
infectious diseases to the domain of structural analysis.
CS-TR-78-699
Report Number: CS-TR-78-702
Institution: Stanford University, Department of Computer Science
Title: An O($n\cdot I \log^2 I$) maximum-flow algorithm
Author: Shiloach, Yossi
Date: December 1978
Abstract:
We present in this paper a new algorithm to find a maximum flow in a
flow-network which has n vertices and m edges in time O($n\cdot I
\log^2 I$), where I = m+n is the input size (up to a constant factor).
This result improves the previous upper bound of Z. Galil [1978], which
was O($I^{7/3}$) in the worst case.
CS-TR-78-702
Report Number: CS-TR-78-709
Institution: Stanford University, Department of Computer Science
Title: Design and analysis of a data structure for representing sorted lists
Author: Brown, Mark R.
Author: Tarjan, Robert E.
Date: December 1978
Abstract:
In this paper we explore the use of 2-3 trees to represent sorted
lists. We analyze the worst-case cost of sequences of insertions and
deletions in 2-3 trees under each of the following three assumptions:
(i) only insertions are performed; (ii) only deletions are performed;
(iii) deletions occur only at the small end of the list and insertions
occur only away from the small end. Our analysis leads to a data
structure for representing sorted lists when the access pattern
exhibits a (perhaps time-varying) locality of reference. This structure
has many of the properties of the representation proposed by Guibas,
McCreight, Plass, and Roberts [1977], but it is substantially simpler
and may be practical for lists of moderate size.
CS-TR-78-709
Report Number: CS-TR-79-703
Institution: Stanford University, Department of Computer Science
Title: A polynomial time algorithm for solving systems of linear inequalities with two variables per inequality
Author: Aspvall, Bengt
Author: Shiloach, Yossi
Date: January 1979
Abstract:
We present a constructive algorithm for solving systems of linear
inequalities (LI) with at most two variables per inequality. The
algorithm is polynomial in the size of the input. The LI problem is of
importance in complexity theory since it is polynomial time equivalent
to linear programming. The subclass of LI treated in this paper is also
of practical interest in mechanical verification systems, and we
believe that the ideas presented can be extended to the general LI
problem.
CS-TR-79-703
Report Number: CS-TR-79-704
Institution: Stanford University, Department of Computer Science
Title: A survey of the state of software for partial differential equations
Author: Sweet, Roland A.
Date: January 1979
Abstract:
This paper surveys the state of general purpose software for the
solution of partial differential equations. A discussion of the
purported capabilities of twenty-one programs is presented. No testing
of the routines was performed.
CS-TR-79-704
Report Number: CS-TR-79-706
Institution: Stanford University, Department of Computer Science
Title: Graph 2-isomorphism is NP-complete
Author: Yao, F. Frances
Date: January 1979
Abstract:
Two graphs G and G' are said to be k-isomorphic if their edge sets can
be partitioned into E(G) = $E_1 \cup E_2 \cup ... \cup E_k$ and E(G') =
${E'}_1 \cup {E'}_2 \cup ... \cup {E'}_k$ such that as graphs, $E_i$
and ${E'}_i$ are isomorphic for $1 \leq i \leq k$. In this note we show
that it is NP-complete to decide whether two graphs are 2-isomorphic.
CS-TR-79-706
Report Number: CS-TR-79-707
Institution: Stanford University, Department of Computer Science
Title: A programming and problem-solving seminar
Author: Van Wyk, Christopher J.
Author: Knuth, Donald E.
Date: January 1979
Abstract:
This report contains edited transcripts of the discussions held in
Stanford's course CS 204, Problem Seminar, during autumn quarter 1978.
Since the topics span a large range of ideas in computer science, and
since most of the important research paradigms and programming
paradigms came up during the discussions, these notes may be of
interest to graduate students of computer science at other
universities, as well as to their professors and to professional people
in the "real world."
CS-TR-79-707
Report Number: CS-TR-79-708
Institution: Stanford University, Department of Computer Science
Title: An analysis of a memory allocation scheme for implementing stacks
Author: Yao, Andrew C.
Date: January 1979
Abstract:
Consider the implementation of two stacks by letting them grow towards
each other in a table of size m. Suppose a random sequence of
insertions and deletions is executed, with each instruction having a
fixed probability p (0 < p < 1/2) of being a deletion. Let $A_p(m)$
denote the expected value of max{x,y}, where x and y are the stack
heights when the table first becomes full. We shall prove that, as $m
\rightarrow \infty$, $A_p(m) = \sqrt{m/(2 \pi (1-2p))} + O((\log m)/
\sqrt{m})$. This gives a solution to an open problem in Knuth ["The Art
of Computer Programming," Vol. 1, Exercise 2.2.2-13].
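A minimal Monte Carlo sketch of the process being analyzed; the interpretation below (each operation addressed to a uniformly chosen stack, deletions on an empty stack ignored) is an assumption made for illustration, not a statement of the report's exact model:

    import random

    def run_once(m, p):
        # Two stacks of heights x and y grow toward each other in a table of size m.
        x = y = 0
        while x + y < m:
            grow = random.random() >= p   # with probability 1-p the step is an insertion
            which = random.randrange(2)   # assumption: target stack chosen uniformly
            if which == 0:
                x = x + 1 if grow else max(0, x - 1)
            else:
                y = y + 1 if grow else max(0, y - 1)
        return max(x, y)

    def estimate_A(m, p, trials=2000):
        return sum(run_once(m, p) for _ in range(trials)) / trials

    print(estimate_A(m=200, p=0.3))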
CS-TR-79-708
Report Number: CS-TR-79-710
Institution: Stanford University, Department of Computer Science
Title: Numerical computation of the Schwarz-Christoffel transformation
Author: Trefethen, Lloyd N.
Date: March 1979
Abstract:
A program is described which computes Schwarz-Christoffel
transformations that map the unit disk conformally onto the interior of
a bounded or unbounded polygon in the complex plane. The inverse map is
also computed. The computational problem is approached by setting up a
nonlinear system of equations whose unknowns are essentially the
"accessory parameters" $z_k$. This system is then solved with a
packaged subroutine. New features of this work include the evaluation
of integrals within the disk rather than along the boundary, making
possible the treatment of unbounded polygons; the use of a compound
form of Gauss-Jacobi quadrature to evaluate the Schwarz-Christoffel
integral, making possible high accuracy at reasonable cost; and the
elimination of constraints in the nonlinear system by a simple change
of variables. Schwarz-Christoffel transformations may be applied to
solve the Laplace and Poisson equations and related problems in
two-dimensional domains with irregular or unbounded (but not curved or
multiply connected) geometries. Computational examples are presented.
The time required to solve the mapping problem is roughly proportional
to $N^3$, where N is the number of vertices of the polygon. A typical
set of computations to 8-place accuracy with $N \leq 10$ takes 1 to 10
seconds on an IBM 370/168.
CS-TR-79-710
Report Number: CS-TR-79-712
Institution: Stanford University, Department of Computer Science
Title: The errata of computer programming
Author: Knuth, Donald E.
Date: January 1979
Abstract:
This report lists all corrections and changes of Volumes 1 and 3 of
"The Art of Computer Programming," as of January 5, 1979. This updates
the previous list in report CS551, May 1976. The second edition of
Volume 2 has been delayed two years due to the fact that it was
completely revised and put into the TEX typesetting language; since
publication of this new edition is not far off, no changes to Volume 2
are listed here.
CS-TR-79-712
Report Number: CS-TR-79-714
Institution: Stanford University, Department of Computer Science
Title: PCFORT: a Fortran-to-Pcode translator
Author: Castaneda, Fernando
Author: Chow, Frederick C.
Author: Nye, Peter
Author: Sleator, Daniel D.
Author: Wiederhold, Gio
Date: January 1979
Abstract:
PCFORT is a compiler for the FORTRAN language designed to fit as a
building block into a PASCAL oriented environment. It forms part of the
programming systems being developed for the S-1 multiprocessor. It is
written in PASCAL, and generates P-code, an intermediate language used
by transportable PASCAL compilers to represent the program in a simple
form. P-code is either compiled or interpreted depending upon the
objectives of the programming system. A PASCAL-written FORTRAN compiler
provides a bridge between the FORTRAN and PASCAL communities. The
implementation allows PASCAL and FORTRAN generated code to be combined
into one program. The FORTRAN language supported here is FORTRAN to the
full 1966 standard, extended with those features commonly expected by
available large scientific programs.
CS-TR-79-714
Report Number: CS-TR-79-715
Institution: Stanford University, Department of Computer Science
Title: S-1 architecture manual
Author: Hailpern, Brent T.
Author: Hitson, Bruce L.
Date: January 1979
Abstract:
This manual provides a complete description of the instruction-set
architecture of the S-1 Uniprocessor (Mark IIA), exclusive of vector
operations. It is assumed that the reader has a general knowledge of
computer architecture. The manual was designed to be both a detailed
introduction to the S-1 and an architecture reference manual. Also
included are user manuals for the FASM Assembler and the S-1 Formal
Description Syntax.
CS-TR-79-715
Report Number: CS-TR-79-716
Institution: Stanford University, Department of Computer Science
Title: A framework for control in production systems
Author: Georgeff, Michael P.
Date: January 1979
Abstract:
A formal model for representing control in production systems is
defined. The formalism allows control to be directly specified
independently of the conflict resolution scheme, and thus allows the
issues of control and nondeterminism to be treated separately. Unlike
previous approaches, it allows control to be examined within a uniform
and consistent framework. It is shown that the formalism provides a
basis for implementing control constructs which, unlike existing
schemes, retain all the properties desired of a knowledge based system
--- modularity, flexibility, extensibility and explanatory capacity.
Most importantly, it is shown that these properties are not a function
of the lack of control constraints, but of the type of information
allowed to establish these constraints. Within the formalism it is also
possible to provide a meaningful notion of the power of control
constructs. This enables the types of control required in production
systems to be examined and the capacity of various schemes to meet
these requirements to be determined. Schemes for improving system
efficiency and resolving nondeterminism are examined, and devices for
representing such meta-level knowledge are described. In particular,
the objectification of control information is shown to provide a better
paradigm for problem solving and for talking about problem solving. It
is also shown that the notion of control provides a basis for a theory
of transformation of production systems, and that this provides a
uniform and consistent approach to problems involving subgoal
protection.
CS-TR-79-716
Report Number: CS-TR-79-718
Institution: Stanford University, Department of Computer Science
Title: AL users' manual
Author: Mujtaba, Mohamed Shahid
Author: Goldman, Ron
Date: January 1979
Abstract:
This document describes the current state of the AL system now in
operation at the Stanford Artificial Intelligence Laboratory, and
teaches the reader how to use it. The system consists of AL, a
high-level programming language for manipulator control useful in
industrial assembly research; POINTY, an interactive system for
specifying representation of parts; and ALAID, an interactive debugger
for AL.
CS-TR-79-718
Report Number: CS-TR-79-719
Institution: Stanford University, Department of Computer Science
Title: Extrapolation of asymptotic expansions by a modified Aitken $\delta^2$-formula
Author: Bjorstad, Petter
Author: Dahlquist, Germund
Author: Grosse, Eric H.
Date: March 1979
Abstract:
A modified Aitken formula permits iterated extrapolations to
efficiently estimate $s_\infty$ from $s_n$ when an asymptotic expansion
$s_n = s_\infty + n^{-k} (c_0 + c_1 n^{-1} + c_2 n^{-2} + ... )$ holds
for some (unknown) coefficients $c_j$. We study the truncation and
irregular error and compare the method with other forms of
extrapolation.
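For comparison, the classical (unmodified) Aitken $\delta^2$ step, of which the report studies a modification, can be sketched in a few lines of Python:

    def aitken(s):
        # One pass of the classical Aitken delta-squared transformation applied
        # to a list of partial results s_0, s_1, ..., s_{n-1}.
        out = []
        for a, b, c in zip(s, s[1:], s[2:]):
            denom = c - 2 * b + a
            out.append(c - (c - b) ** 2 / denom if denom != 0 else c)
        return out

    # Partial sums of the alternating harmonic series, which converges to ln 2.
    s = [sum((-1) ** k / (k + 1) for k in range(n + 1)) for n in range(10)]
    print(s[-1], aitken(s)[-1])   # the extrapolated value is much closer to ln 2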
CS-TR-79-719
Report Number: CS-TR-79-720
Institution: Stanford University, Department of Computer Science
Title: On grid optimization for boundary value problems
Author: Glowinski, Roland
Date: February 1979
Abstract:
We discuss in this report the numerical procedures which can be used to
obtain the optimal grid when solving by a finite element method a model
boundary value problem of elliptic type modelling the potential flow of
an incompressible inviscid fluid. Results of numerical experiments are
presented.
CS-TR-79-720
Report Number: CS-TR-79-721
Institution: Stanford University, Department of Computer Science
Title: On fault-tolerant networks for sorting
Author: Yao, Andrew C.
Author: Yao, F. Frances
Date: February 1979
Abstract:
The study of constructing reliable systems from unreliable components
goes back to the work of von Neumann, and of Moore and Shannon. The
present paper studies the use of redundancy to enhance reliability for
sorting and related networks built from unreliable comparators. Two
models of fault-tolerant networks are discussed. The first model
patterns after the concept of error-correcting codes in information
theory, and the other follows the stochastic criterion used by von
Neumann and Moore-Shannon. It is shown, for example, that an additional
k(2n-3) comparators are sufficient to render a sorting network
reliable, provided that no more than k of its comparators may be faulty.
CS-TR-79-721
Report Number: CS-TR-79-722
Institution: Stanford University, Department of Computer Science
Title: A structural model for database systems
Author: Wiederhold, Gio
Author: El-Masri, Ramez A.
Date: February 1979
Abstract:
This report presents a model to be used for database design. Because
our motivation extends to providing guidance for the structured
implementation of a database, we call our model the 'Structural Model.'
We derive the design using criteria of correctness, relevance, and
performance from semantic and operational specifications obtained from
multiple sources. These sources typically correspond to prospective
users or user groups of the database. The integration of such
specifications is a central issue in the development of an integrated
structural database model. The structural model is used for the design
of the logical structures that represent a real-world situation.
However, it is not meant to represent all possible real-world
semantics, but a subset of the semantics which are important in
database modelling. The model uses relations as building blocks, and
hence can be considered as an extension of Codd's relational model
[1970]. The main extensions to the relational model are the explicit
representation of logical connections between relations, the inclusion
of insertion-deletion constraints in the model itself, and the
separation of relations into several structural types. Connections
between relations are used to represent existence dependencies of
tuples in different relations. These existence dependencies are
important for the definition of semantics of relationships between
classes of real-world entities. The connections between relations are
used to specify these existence dependencies, and to ensure that they
remain valid when the database is updated. Hence, connections
implicitly define a basic, limited set of integrity constraints on the
database, those that identify and maintain existence dependencies among
tuples from different relations. Consequently, the rules for the
maintenance of the structural integrity of the model under insertion
and deletion of tuples are easy to specify. Structural relation types
are used to specify how each relation may be connected to other
relations in the model. Relations are classified into five types:
primary relations, referenced relations, nest relations, association
relations, and lexicon relations. The motivation behind the choice of
these relation types is discussed, as is their use in data model
design. A methodology for combining multiple, overlapping data models -
also called user views in the literature - is associated with the
structural model. The database model, or conceptual schema, which
represents the integrated database, may thus be derived from the
individual data models of the users. We believe that the structural
model can be used to represent the data relationships within the
conceptual schema of the ANSI/SPARC DBMS model since it can support
database submodels, also called external schema, and maintain the
integrity of the submodels with respect to the integrity constraints
expressible in the structural model. We then briefly discuss the use of
the structural model in database design and implementation. The
structural model provides a tool to deal effectively with the
complexity of large, real-world databases. We begin this report with a
very short review of existing database models. In Chapter 2, we state
the purpose of the model, and in Chapter 3 we describe the structural
model, first informally and then using a formal framework based on
extensions of the relational model. Chapter 4 defines the
representations we use, and Chapter 5 covers the integration of data
models that represent the different user specifications into an
integrated database model. Formal descriptions and examples of the
prevalent cases are given. The work is then placed into context first
relative to other work (Chapter 6) and then briefly within our
methodology for database design (Chapter 7).
CS-TR-79-722
Report Number: CS-TR-79-726
Institution: Stanford University, Department of Computer Science
Title: An analysis of (h,k,l)-shellsort
Author: Yao, Andrew Chi-Chih
Date: March 1979
Abstract:
One classical sorting algorithm, whose performance in many cases
remains unanalyzed, is Shellsort. Let $\vec{h}$ be a t-component vector
of positive integers. An $\vec{h}$-Shellsort will sort any given n
elements in t passes, by means of comparisons and exchanges of
elements. Let $S_j$($\vec{h}$;n) denote the average number of element
exchanges in the j-th pass, assuming that all the n! initial orderings
are equally likely. In this paper we derive asymptotic formulas of
$S_j$($\vec{h}$;n) for any fixed $\vec{h}$ = (h,k,l), making use of a
new combinatorial interpretation of $S_3$. For the special case
$\vec{h}$ = (3,2,1), the analysis is further sharpened to yield exact
expressions.
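A minimal Python sketch of an $\vec{h}$-Shellsort that records the number of exchanges made in each pass, i.e. the quantity whose average is $S_j$($\vec{h}$;n):

    import random

    def shellsort(a, gaps=(3, 2, 1)):
        # One gapped insertion-sort pass per increment h; count exchanges per pass.
        exchanges = []
        for h in gaps:
            count = 0
            for i in range(h, len(a)):
                j = i
                while j >= h and a[j - h] > a[j]:
                    a[j - h], a[j] = a[j], a[j - h]
                    j -= h
                    count += 1
            exchanges.append(count)
        return exchanges

    data = list(range(20))
    random.shuffle(data)
    print(shellsort(data), data == sorted(data))   # per-pass exchange counts, True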
CS-TR-79-726
Report Number: CS-TR-79-728
Institution: Stanford University, Department of Computer Science
Title: Union-member algorithms for non-disjoint sets
Author: Shiloach, Yossi
Date: January 1979
Abstract:
In this paper we deal with the following problem. We are given a finite
set U = {$u_1$,...,$u_n$} and a set $\mathcal{S}$ =
{$S_1$,...,$S_m$} of subsets of U. We are also given m-1 UNION
instructions that have the form UNION($S_i$,$S_j$) and mean "add the
set $S_i \cup S_j$ to the collection and delete $S_i$ and $S_j$."
Interspaced among the UNIONs are MEMBER(i,j) questions that mean "does
$u_i$ belong to $S_j$?" We present two algorithms that exhibit the
trade-off among the three interesting parameters of this problem, which
are: 1. Time required to answer one membership question. 2. Time
required to perform the m-1 UNIONs altogether. 3. Space. We also give
an application of these algorithms to the problem of 5-coloring of
planar graphs.
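A minimal baseline sketch in Python (not the trade-off structures developed in the report): keep each current set as a hash set and merge the smaller into the larger, so MEMBER costs O(1) at the price of more union and space work:

    class UnionMember:
        # Each current set is kept as a Python set; UNION merges the smaller
        # set into the larger one and returns the name of the surviving set.
        def __init__(self, sets):
            self.sets = {name: set(members) for name, members in sets.items()}

        def union(self, a, b):
            if len(self.sets[a]) < len(self.sets[b]):
                a, b = b, a
            self.sets[a] |= self.sets.pop(b)
            return a

        def member(self, u, s):
            return u in self.sets[s]

    um = UnionMember({'S1': {1, 2}, 'S2': {2, 3}, 'S3': {4}})
    s = um.union('S1', 'S2')
    print(um.member(3, s), um.member(4, s))   # True False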
CS-TR-79-728
Report Number: CS-TR-79-729
Institution: Stanford University, Department of Computer Science
Title: A unified approach to path problems
Author: Tarjan, Robert Endre
Date: April 1979
Abstract:
We describe a general method for solving path problems on directed
graphs. Such path problems include finding shortest paths, solving
sparse systems of linear equations, and carrying out global flow
analysis of computer programs. Our method consists of two steps. First,
we construct a collection of regular expressions representing sets of
paths in the graph. This can be done by using any standard algorithm,
such as Gaussian or Gauss-Jordan elimination. Next, we apply a natural
mapping from regular expressions into the given problem domain. We
exhibit the mappings required to find shortest paths, solve sparse
systems of linear equations, and carry out global flow analysis. Our
results provide a general-purpose algorithm for solving any path
problem, and show that the problem of constructing path expressions is
in some sense the most general path problem.
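A minimal Python sketch of the "one generic elimination, then reinterpret" idea, here instantiated over the (min, +) semiring so that the generic routine computes all-pairs shortest distances; this instantiation is an illustration, not the report's regular-expression construction:

    INF = float('inf')

    def algebraic_closure(w, plus=min, times=lambda a, b: a + b, star=lambda a: 0):
        # Gauss-Jordan / Kleene style elimination over a closed semiring.
        # w[i][j] is the weight of edge (i, j); INF plays the role of the semiring zero.
        n = len(w)
        d = [row[:] for row in w]
        for k in range(n):
            dkk = star(d[k][k])
            for i in range(n):
                for j in range(n):
                    d[i][j] = plus(d[i][j], times(times(d[i][k], dkk), d[k][j]))
        return d

    w = [[0, 3, INF],
         [INF, 0, 2],
         [7, INF, 0]]
    print(algebraic_closure(w))   # all-pairs shortest distances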
CS-TR-79-729
Report Number: CS-TR-79-730
Institution: Stanford University, Department of Computer Science
Title: Qualifying examinations in computer science, 1965-1978
Author: Liang, Frank M.
Date: April 1979
Abstract:
Since 1965, the Stanford Computer Science Department has periodically
given "qualifying examinations" as one of the requirements of its
graduate program. These examinations are given in each of six subareas
of computer science: Programming Languages and Systems, Artificial
Intelligence, Numerical Analysis, Computer Design, Theory of
Computation, and Analysis of Algorithms. This report presents the
questions from these examinations, and also the associated reading
lists.
CS-TR-79-730
Report Number: CS-TR-79-731
Institution: Stanford University, Department of Computer Science
Title: Stanford Pascal Verifier user manual
Author: Luckham, David C.
Author: German, Steven M.
Author: von Henke, Friedrich W.
Author: Karp, Richard A.
Author: Milne, P. W.
Author: Oppen, Derek C.
Author: Polak, Wolfgang
Author: Scherlis, William L.
Date: March 1979
Abstract:
The Stanford PASCAL verifier is an interactive program verification
system. It automates much of the work necessary to analyze a program
for consistency with its documentation, and to give a rigorous
mathematical proof of such consistency or to pin-point areas of
inconsistency. It has been shown to have applications as an aid to
programming, and to have potential for development as a new and useful
tool in the production of reliable software.
CS-TR-79-731
Report Number: CS-TR-79-732
Institution: Stanford University, Department of Computer Science
Title: Notes on introductory combinatorics
Author: Woods, Donald R.
Date: April 1979
Abstract:
In the spring of 1978, Professors George Polya and Robert Tarjan teamed
up to teach CS 150 - Introduction to Combinatorics. This report
consists primarily of the class notes and other handouts produced by
the author as teaching assistant for the course. Among the topics
covered are elementary subjects such as combinations and permutations,
mathematical tools such as generating functions and Polya's Theory of
Counting, and analyses of specific problems such as Ramsey Theory,
matchings, and Hamiltonian and Eulerian paths.
CS-TR-79-732
Report Number: CS-TR-79-733
Institution: Stanford University, Department of Computer Science
Title: A lower bound to finding convex hulls
Author: Yao, Andrew Chi-Chih
Date: April 1979
Abstract:
Given a set S of n distinct points {($x_i$,$y_i$) | 0 $\leq$ i < n},
the convex hull problem is to determine the vertices of the convex hull
H(S). All the known algorithms for solving this problem have a
worst-case running time of cn log n or higher, and employ only
quadratic tests, i.e., tests of the form f($x_0$, $y_0$, $x_1$,
$y_1$,...,$x_{n-1}$, $y_{n-1}$): 0 with f being any polynomial of
degree not exceeding 2. In this paper, we show that any algorithm in
the quadratic decision-tree model must make cn log n tests for some
input.
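For contrast with the lower bound, a minimal Python sketch of a matching O(n log n) upper bound, Andrew's monotone-chain construction, whose only geometric test is the sign of a degree-2 polynomial in the coordinates:

    def convex_hull(points):
        # Andrew's monotone chain: one sort, then a lower and an upper sweep.
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts

        def cross(o, a, b):   # z-component of (a - o) x (b - o): a quadratic test
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def half(seq):
            h = []
            for p in seq:
                while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                    h.pop()
                h.append(p)
            return h

        lower, upper = half(pts), half(reversed(pts))
        return lower[:-1] + upper[:-1]   # hull vertices in counterclockwise order

    print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))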
CS-TR-79-733
Report Number: CS-TR-79-734
Institution: Stanford University, Department of Computer Science
Title: Fast algorithms for solving path problems
Author: Tarjan, Robert Endre
Date: April 1979
Abstract:
Let G = (V,E) be a directed graph with a distinguished source vertex s.
The single-source path expression problem is to find, for each vertex
v, a regular expression P(s,v) which represents the set of all paths in
G from s to v. A solution to this problem can be used to solve shortest
path problems, solve sparse systems of linear equations, and carry out
global flow analysis. We describe a method to compute path expressions
by dividing G into components, computing path expressions on the
components by Gaussian elimination, and combining the solutions. This
method requires O(m $\alpha$(m,n)) time on a reducible flow graph,
where n is the number of vertices in G, m is the number of edges in G,
and $\alpha$ is a functional inverse of Ackermann's function. The
method makes use of an algorithm for evaluating functions defined on
paths in trees. A simplified version of the algorithm, which runs in
O(m log n) time on reducible flow graphs, is quite easy to implement
and efficient in practice.
CS-TR-79-734
Report Number: CS-TR-79-735
Institution: Stanford University, Department of Computer Science
Title: Kronecker's canonical form and the QZ algorithm
Author: Wilkinson, James Hardy
Date: April 1979
Abstract:
In the QZ algorithm the eigenvalues of Ax = $\lambda$Bx are computed
via a reduction to the form $\tilde{A}$x = $\lambda \tilde{B}$x where
$\tilde{A}$ and $\tilde{B}$ are upper triangular. The eigenvalues are
given by ${\lambda}_i$ = $a_{ii}$/$b_{ii}$. It is shown that when the
pencil $\tilde{A}$ - $\lambda \tilde{B}$ is singular or nearly singular
a value of ${\lambda}_i$ may have no significance even when
$\tilde{a}_{ii}$ and $\tilde{b}_{ii}$ are of full size.
CS-TR-79-735
Report Number: CS-TR-79-736
Institution: Stanford University, Department of Computer Science
Title: Note on the practical significance of the Drazin inverse
Author: Wilkinson, James Hardy
Date: April 1979
Abstract:
The solution of the differential system Bx = Ax + f where A and B are n
x n matrices and A - $\lambda$B is not a singular pencil may be
expressed in terms of the Drazin inverse. It is shown that there is a
simple reduced form for the pencil A - $\lambda$B which is adequate for
the determination of the general solution and that although the Drazin
inverse could be determined efficiently from this reduced form it is
inadvisable to do so.
CS-TR-79-736
Report Number: CS-TR-79-737
Institution: Stanford University, Department of Computer Science
Title: On the average-case complexity of selecting the k-th best
Author: Yao, Andrew C.
Author: Yao, F. Frances
Date: April 1979
Abstract:
Let ${\bar{V}}_k$(n) be the minimum average number of pairwise
comparisons needed to find the k-th largest of n numbers (k $\geq$ 2),
assuming that all n! orderings are equally likely. D. W. Matula proved
that, for some absolute constant c, ${\bar{V}}_k$(n)-n $\leq$ c k log
log n as n $\rightarrow \infty$. In the present paper, we show that
there exists an absolute constant c' > 0 such that
${\bar{V}}_k$(n)-n $\geq$ c' k log log n as n $\rightarrow \infty$,
proving a conjecture of Matula.
CS-TR-79-737
Report Number: CS-TR-79-738
Institution: Stanford University, Department of Computer Science
Title: Computations related to G-stability of linear multistep methods
Author: LeVeque, Randall J.
Author: Dahlquist, Germund
Author: Andree, Dan
Date: May 1979
Abstract:
In Dahlquist's recent proof of the equivalence of A-stability and
G-stability, an algorithm was presented for calculating a G-stability
matrix for any A-stable linear multistep method. Such matrices, and
various quantities computable from them, are useful in many aspects of
the study of the stability of a given method. For example, information
may be gained as to the shape of the stability region, or the rate of
growth of unstable solutions. We present a summary of the relevant
theory and the results of some numerical calculations performed for
several backward differentiation, Adams-Bashforth, and Adams-Moulton
methods of low order.
CS-TR-79-738
Report Number: CS-TR-79-739
Institution: Stanford University, Department of Computer Science
Title: Induction over large data bases
Author: Quinlan, J. R.
Date: May 1979
Abstract:
Techniques for discovering rules by induction from large collections of
instances are developed. These are based on an iterative scheme for
dividing the instances into two sets, only one of which needs to be
randomly accessible. These techniques have made it possible to discover
complex rules from data bases containing many thousands of instances.
Results of several experiments using them are reported.
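One plausible reading of the iterative scheme is a windowing-style loop; the sketch below is our own illustration (the function names and the toy single-attribute threshold rule are invented, not taken from the report): a rule is induced from a small randomly accessible "window", the remaining instances are scanned sequentially, and the misclassified ones are added to the window until the rule covers everything.

    # Hypothetical sketch: windowing-style rule induction (toy rule; not the report's code).
    def induce(window):
        # Toy rule: a threshold on one numeric attribute separating the two classes.
        pos = [x for x, y in window if y]
        neg = [x for x, y in window if not y]
        if not pos or not neg:
            return float("inf") if not pos else float("-inf")
        return (max(neg) + min(pos)) / 2.0  # assumes the window is separable

    def classify(rule, x):
        return x >= rule

    def induce_by_windowing(instances, window_size=4):
        # Only the window needs random access; the rest is scanned sequentially.
        window = list(instances[:window_size])
        rest = list(instances[window_size:])
        while True:
            rule = induce(window)
            exceptions = [(x, y) for x, y in rest if classify(rule, x) != y]
            if not exceptions:
                return rule
            window.extend(exceptions)
            rest = [(x, y) for x, y in rest if classify(rule, x) == y]

    if __name__ == "__main__":
        data = [(v, v >= 10) for v in range(20)]
        print(induce_by_windowing(data))  # prints a threshold near 10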
CS-TR-79-739
Report Number: CS-TR-79-740
Institution: Stanford University, Department of Computer Science
Title: The logic of aliasing
Author: Cartwright, Robert
Author: Oppen, Derek C.
Date: September 1979
Abstract:
We give a new version of Hoare's logic which correctly handles programs
with aliased variables. The central proof rules of the logic (procedure
call and assignment) are proved sound and complete.
CS-TR-79-740
Report Number: CS-TR-79-748
Institution: Stanford University, Department of Computer Science
Title: Fast algorithms for solving Toeplitz systems of equations and finding rational Hermite interpolants
Author: Yun, David Y. Y.
Date: July 1979
Abstract:
We present a new algorithm that reduces the computation for solving a
Toeplitz system to O(n ${log}^2$ n) and automatically resolves all
degenerate cases that troubled earlier methods. Our fundamental results show
that all rational Hermite interpolants, including the Pade approximants that
are intimately related to this solution process, can be computed fast by an
Euclidean algorithm. In this report we bring out all these
relationships with mathematical justifications and mention important
applications including decoding BCH codes.
CS-TR-79-748
Report Number: CS-TR-79-753
Institution: Stanford University, Department of Computer Science
Title: Should tables be sorted?
Author: Yao, Andrew Chi-Chih
Date: July 1979
Abstract:
We examine optimality questions in the following information retrieval
problem: Given a set S of n keys, store them so that queries of the
form "Is x $\in$ S?" can be answered quickly. It is shown that, in a
rather general model including all the commonly-used schemes, $\lceil$
lg(n+1) $\rceil$ probes to the table are needed in the worst case,
provided the key space is sufficiently large. The effects of smaller
key space and arbitrary encoding are also explored.
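For reference, the matching upper bound for a sufficiently large key space is attained by the most familiar scheme, a sorted table probed by binary search; a minimal sketch (ours, not from the report):

    # Minimal sketch (not from the report): membership in a sorted table by
    # binary search, using at most ceil(lg(n+1)) probes in the worst case.
    from bisect import bisect_left

    def member(table, x):
        i = bisect_left(table, x)          # each comparison probes one cell
        return i < len(table) and table[i] == x

    if __name__ == "__main__":
        S = sorted([3, 1, 4, 1, 5, 9, 2, 6])
        print(member(S, 5), member(S, 7))  # True False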
CS-TR-79-753
Report Number: CS-TR-79-759
Institution: Stanford University, Department of Computer Science
Title: Schema-shift strategies to understanding structured texts in natural language
Author: Bonnet, Alain
Date: August 1979
Abstract:
This report presents BAOBAB-2, a computer program built upon MYCIN
[Shortliffe, 1974] that is used for understanding medical summaries
describing the status of patients. Due both to the conventional way
physicians present medical problems in these summaries and the
constrained nature of medical jargon, these texts have a very strong
structure. BAOBAB-2 takes advantage of this structure by using a model
of this organization as a set of related schemas that facilitate the
interpretation of these texts. Structures of the schemas and their
relation to the surface structure are described. Issues relating to
selection and use of these schemas by the program during interpretation
of the summaries are discussed.
CS-TR-79-759
Report Number: CS-TR-79-760
Institution: Stanford University, Department of Computer Science
Title: Some monotonicity properties of partial orders
Author: Graham, Ronald L.
Author: Yao, Andrew C.
Author: Yao, F. Frances
Date: September 1979
Abstract:
A fundamental quantity which arises in the sorting of n numbers $a_1$,
$a_2$,..., $a_n$ is Pr($a_i$ < $a_j$ | P), the probability that
$a_i$ < $a_j$ assuming that all linear extensions of the partial
order P are equally likely. In this paper we establish various
properties of Pr($a_i$ < $a_j$ | P) and related quantities. In
particular, it is shown that Pr($a_i$ < $b_j$ | P') $\geq$ Pr($a_i$
< $b_j$ | P), if the partial order P consists of two disjoint
linearly ordered sets A = {$a_1$ < $a_2$ < ... < $a_m$}, B =
{$b_1$ < $b_2$ < ... < $b_n$} and P' = P $\cup$ {any relations
of the form $a_k$ < $b_l$}. These inequalities have applications in
determining the complexity of certain sorting-like computations.
CS-TR-79-760
Report Number: CS-TR-79-761
Institution: Stanford University, Department of Computer Science
Title: Gossiping without duplicate transmissions
Author: West, Douglas B.
Date: August 1979
Abstract:
n people have distinct bits of information, which they communicate via
telephone calls in which they transmit everything they know. We require
that no one ever hear the same piece of information twice. In the case
where 4 divides n and n $\geq$ 8, we provide a construction that transmits
all the information using only 9n/4 - 6 calls. Previous constructions used
(1/2) n log n calls.
CS-TR-79-761
Report Number: CS-TR-79-762
Institution: Stanford University, Department of Computer Science
Title: METAFONT: a system for alphabet design
Author: Knuth, Donald E.
Date: September 1979
Abstract:
This is the user's manual for METAFONT, a companion to the TEX
typesetting system. The system makes it fairly easy to define high
quality fonts of type in a machine-independent manner; a user writes
"programs" in a new language developed for this purpose. By varying
parameters of a design, an unlimited number of typefaces can be
obtained from a single set of programs. The manual also sketches the
algorithms used by the system to draw the character shapes.
CS-TR-79-762
Report Number: CS-TR-79-763
Institution: Stanford University, Department of Computer Science
Title: A symmetric chain decomposition of L(4,n)
Author: West, Douglas B.
Date: August 1979
Abstract:
L(m,n) is the set of integer m-tuples ($a_1$,...,$a_m$) with $0\leq a_1
\leq ...\leq a_m \leq n$, ordered by $\underline{a} \leq \underline{b}$
when $a_i\leq b_i$ for all i. R. Stanley conjectured that L(m,n) is a
symmetric chain order for all (m,n). We verify this by construction for
m = 4.
CS-TR-79-763
Report Number: CS-TR-79-764
Institution: Stanford University, Department of Computer Science
Title: On the time-space tradeoff for sorting with linear queries
Author: Yao, Andrew Chi-Chih
Date: August 1979
Abstract:
Extending a result of Borodin, et al., we show that any branching
program using linear queries " $\sum_{i} {\lambda}_i {x_i}: c$ " to
sort n numbers $x_1$,$x_2$,...,$x_n$ must satisfy the time-space
tradeoff relation TS = $\Omega (n^2)$. The same relation is also shown
to be true for branching programs that use queries " min R = ? " where
R is any subset of {$x_1$,$x_2$,...,$x_n$}.
CS-TR-79-764
Report Number: CS-TR-79-765
Institution: Stanford University, Department of Computer Science
Title: Relation between the complexity and the probability of large numbers
Author: Gacs, Peter
Date: September 1979
Abstract:
H(x), the negative logarithm of the a priori probability M(x), is
Levin's variant of Kolmogorov's complexity of a natural number x. Let
$\alpha (n)$ be the minimum complexity of a number larger than n, and s(n)
the logarithm of the a priori probability of obtaining a number larger
than n. It was known that $s(n) \leq \alpha (n) \leq s(n) + H(\lceil
s(n) \rceil )$. We show that the second estimate is in some sense sharp.
CS-TR-79-765
Report Number: CS-TR-79-767
Institution: Stanford University, Department of Computer Science
Title: On Stewart's singular value decomposition for partitioned orthogonal matrices
Author: Van Loan, Charles
Date: September 1979
Abstract:
A variant of the singular value decomposition for orthogonal matrices
due to G. W. Stewart is discussed. It is shown to be useful in the
analysis of (a) the total least squares problem, (b) the
Golub-Klema-Stewart subset selection algorithm, and (c) the algebraic
Riccati equation.
CS-TR-79-767
Report Number: CS-TR-79-770
Institution: Stanford University, Department of Computer Science
Title: Pretty printing
Author: Oppen, Derek C.
Date: October 1979
Abstract:
An algorithm for pretty printing is given. For an input stream of
length n and an output device with margin width m, the algorithm
requires time O(n) and space O(m). The algorithm is described in terms
of two parallel processes; the first scans the input stream to
determine the space required to print logical blocks of tokens; the
second uses this information to decide where to break lines of text;
the two processes communicate by means of a buffer of size O(m). The
algorithm does not wait for the entire stream to be input, but begins
printing as soon as it has received a full line of input. The algorithm
is easily implemented.
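As a much simpler illustration of the underlying measure-then-break idea (our own sketch; it is not Oppen's streaming algorithm and does not achieve its O(n) time / O(m) space bounds):

    # Hypothetical sketch: logical blocks are nested lists of tokens; a block
    # that fits within the margin is printed flat, otherwise its sub-blocks
    # are printed on separate, indented lines.
    def flatten(block):
        return [block] if isinstance(block, str) else [t for b in block for t in flatten(b)]

    def measure(block):
        toks = flatten(block)
        return sum(len(t) for t in toks) + max(len(toks) - 1, 0)

    def pretty(block, margin, indent=0, out=None):
        out = [] if out is None else out
        if isinstance(block, str) or measure(block) + indent <= margin:
            out.append(" " * indent + " ".join(flatten(block)))
        else:
            for sub in block:
                pretty(sub, margin, indent + 2, out)
        return out

    if __name__ == "__main__":
        prog = ["begin", ["x", ":=", "f(x)"], ["y", ":=", "g(x, y)"], "end"]
        print("\n".join(pretty(prog, margin=20)))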
CS-TR-79-770
Report Number: CS-TR-79-773
Institution: Stanford University, Department of Computer Science
Title: Updating formulae and a pairwise algorithm for computing sample variances
Author: Chan, Tony F.
Author: Golub, Gene H.
Author: LeVeque, Randall J.
Date: November 1979
Abstract:
A general formula is presented for computing the sample variance for a
sample of size m + n given the means and variances for two subsamples
of sizes m and n. This formula is used in the construction of a
pairwise algorithm for computing the variance. Other applications are
discussed as well, including the use of updating formulae in a parallel
computing environment. We present numerical results and rounding error
analyses for several numerical schemes.
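The combining identity in question, in a common equivalent form (stated here in terms of sums of squared deviations rather than the variances themselves; our paraphrase, not the report's notation): if the subsamples have sizes m and n, means $\bar{x}$ and $\bar{y}$, and sums of squared deviations $S_x$ and $S_y$ about their own means, the combined sum of squared deviations is $S = S_x + S_y + \frac{mn}{m+n} (\bar{x} - \bar{y})^2$. A minimal pairwise sketch:

    # Minimal sketch (ours, not the report's code): pairwise combination of
    # (size, mean, sum of squared deviations) triples.
    def combine(m, mean1, s1, n, mean2, s2):
        delta = mean2 - mean1
        size = m + n
        mean = mean1 + delta * n / size
        s = s1 + s2 + delta * delta * m * n / size
        return size, mean, s

    def pairwise_sample_variance(data):
        stats = [(1, x, 0.0) for x in data]
        while len(stats) > 1:                      # merge neighbours pairwise, tree-fashion
            merged = [combine(*stats[i], *stats[i + 1])
                      for i in range(0, len(stats) - 1, 2)]
            if len(stats) % 2:
                merged.append(stats[-1])
            stats = merged
        size, _, s = stats[0]
        return s / (size - 1)

    if __name__ == "__main__":
        xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
        print(pairwise_sample_variance(xs))        # sample variance, about 4.571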
CS-TR-79-773
Report Number: CS-TR-79-774
Institution: Stanford University, Department of Computer Science
Title: Large scale geodetic least squares adjustment by dissection and orthogonal decomposition
Author: Golub, Gene H.
Author: Plemmons, Robert J.
Date: November 1979
Abstract:
Very large scale matrix problems currently arise in the context of
accurately computing the coordinates of points on the surface of the
earth. Here geodesists adjust the approximate values of these
coordinates by computing least squares solutions to large sparse
systems of equations which result from relating the coordinates to
certain observations such as distances or angles between points. The
purpose of this paper is to suggest an alternative to the formation and
solution of the normal equations for these least squares adjustment
problems. In particular, it is shown how a block-orthogonal
decomposition method can be used in conjunction with a nested
dissection scheme to produce an algorithm for solving such problems
which combines efficient data management with numerical stability. As
an indication of the magnitude that these least squares adjustment
problems can sometimes attain, the forthcoming readjustment of the
North American Datum in 1983 by the National Geodetic Survey is
discussed. Here it becomes necessary to linearize and solve an
overdetermined system of approximately 6,000,000 equations in 400,000
unknowns - a truly large-scale matrix problem.
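The basic point of avoiding the normal equations can be shown in a few lines (a sketch of ours under simplified assumptions; it ignores the sparsity, blocking, and nested-dissection ordering that the report is actually about):

    # Sketch: solve the overdetermined system A x ~ b by an orthogonal (QR)
    # decomposition instead of forming the normal equations A^T A x = A^T b,
    # which squares the condition number of the problem.
    import numpy as np

    def lstsq_qr(A, b):
        Q, R = np.linalg.qr(A)            # A = Q R, Q with orthonormal columns
        return np.linalg.solve(R, Q.T @ b)

    def lstsq_normal(A, b):
        return np.linalg.solve(A.T @ A, A.T @ b)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((100, 5))
        b = rng.standard_normal(100)
        print(np.allclose(lstsq_qr(A, b), lstsq_normal(A, b)))  # True for this well-conditioned A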
CS-TR-79-774
Report Number: CS-TR-79-775
Institution: Stanford University, Department of Computer Science
Title: The analysis of sequential experiments with feedback to subjects
Author: Diaconis, Persi
Author: Graham, Ronald L.
Date: November 1979
Abstract:
A problem arising in taste testing, medical, and parapsychology
experiments can be modeled as follows. A deck of n cards contains $c_i$
cards labeled i, $1 \leq i \leq r$. A subject guesses at the cards
sequentially. After each guess the subject is told the card just
guessed (or at least if the guess was correct or not). We determine the
optimal and worst case strategies for subjects and the distribution of
the number of correct guesses under these strategies. We show how to
use skill scoring to evaluate such experiments in a way which
(asymptotically) does not depend on the subject's strategy.
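A small simulation makes the complete-feedback setup concrete (a hypothetical sketch of ours; the deck composition is invented, and the greedy subject below simply names a most-frequent remaining type):

    # Hypothetical sketch: one pass through a shuffled deck with full feedback,
    # the subject greedily guessing a most-frequent remaining type.
    import random
    from collections import Counter

    def play(counts, rng):
        deck = [i for i, c in counts.items() for _ in range(c)]
        rng.shuffle(deck)
        remaining = Counter(counts)
        correct = 0
        for card in deck:
            guess = max(remaining, key=remaining.get)
            correct += (guess == card)
            remaining[card] -= 1            # full feedback: the card is revealed
            if remaining[card] == 0:
                del remaining[card]
        return correct

    if __name__ == "__main__":
        rng = random.Random(0)
        trials = [play({1: 13, 2: 13, 3: 13, 4: 13}, rng) for _ in range(2000)]
        print(sum(trials) / len(trials))    # average number of correct guesses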
CS-TR-79-775
Report Number: CS-TR-79-777
Institution: Stanford University, Department of Computer Science
Title: On constant weight codes and harmonious graphs
Author: Graham, Ronald L.
Author: Sloane, Neil J. A.
Date: December 1979
Abstract:
Very recently a new method has been developed for finding lower bounds
on the maximum number of codewords possible in a code of minimum
distance d and length n. This method has led in turn to a number of
interesting questions in graph theory and additive number theory. In
this brief survey we summarize some of these developments.
CS-TR-79-777
Report Number: CS-TR-79-778
Institution: Stanford University, Department of Computer Science
Title:
A hierarchical associative architecture for the parallel evaluation of
relational algebraic database primitives
Author: Shaw, David Elliot
Date: October 1979
Abstract:
Algorithms are described and analyzed for the efficient evaluation of
the primitive operators of a relational algebra on a proposed non-von
Neumann machine based on a hierarchy of associative storage devices.
This architecture permits an O(log n) decrease in time complexity over
the best known evaluation methods on a conventional computer system,
without the use of redundant storage, and using currently available and
potentially competitive technology. In many cases of practical import,
the proposed architecture may also permit a significant improvement (by
a factor roughly proportional to the capacity of the primary
associative storage device) over the performance of previously
implemented or proposed database machine architectures based on
associative secondary storage devices.
CS-TR-79-778
Report Number: CS-TR-79-781
Institution: Stanford University, Department of Computer Science
Title: Exploring the use of domain knowledge for query processing efficiency
Author: King, Jonathan J.
Date: December 1979
Abstract:
An approach to query optimization is described that draws on two
sources of knowledge: real-world constraints on data values in the
application domain served by the database; and knowledge about the
current structure of the database and the cost of available retrieval
processes. Real world knowledge is embodied in rules that are much like
semantic integrity rules. The approach, called "query rephrasing", is
to generate semantic equivalents of user queries that cost less to
process than the original queries. The operation of a prototype system
based on this approach is discussed in the context of simple queries
which restrict a single file. The need for heuristics to limit the
generation of equivalent queries is also discussed, and a method using
"constraint thresholds" derived from a model of the retrieval process
is proposed.
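A purely hypothetical illustration of the idea (the domain, the rule, and the query representation below are invented for illustration and are not the report's system): a semantic-integrity-style rule lets the optimizer add a cheap, indexable restriction that is implied by the user's original restriction.

    # Hypothetical sketch of query rephrasing: a query is a list of
    # (field, op, value) restrictions; a rule says one restriction
    # semantically implies another that is cheaper to check.
    def rephrase(query, rules):
        """Add every implied restriction whose premise is already implied by the query."""
        out = list(query)
        for premise, implied in rules:
            field, op, value = premise
            for qf, qop, qv in query:
                # the premise (field >= value) is implied by a stricter query term
                if qf == field and qop == op == ">=" and qv >= value and implied not in out:
                    out.append(implied)
        return out

    if __name__ == "__main__":
        # invented rule: every shipment weighing at least 500 tons travels by rail
        rules = [(("weight", ">=", 500), ("mode", "=", "rail"))]
        query = [("weight", ">=", 700)]
        print(rephrase(query, rules))
        # [('weight', '>=', 700), ('mode', '=', 'rail')] -- the added term can be
        # checked against an index before the costly scan on the unindexed field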
CS-TR-79-781
Report Number: CS-TR-79-816
Institution: Stanford University, Department of Computer Science
Title: Automating the study of clinical hypotheses on a time-oriented data base: the RX Project
Author: Blum, Robert L.
Date: November 1979
Abstract:
The existence of large chronic disease data bases offers the
possibility of studying hypotheses of major medical importance. An
objective of the RX Project is to assist a clinical researcher with the
tasks of experimental design and statistical analysis. A major
component of RX is a knowledge base of medicine and statistics,
organized as a frame-based, taxonomic tree. RX determines confounding
variables, study design, and analytic techniques. It then gathers data,
analyzes it, and interprets results. The American Rheumatism
Association Medical Information System is used.
CS-TR-79-816