Foundations of Computer Science

If we are still at a branch node, then the effect is to update an existing array element. A similar function can shrink an array by one. Exercise 8: Then repeat this task using the order (Gerald, 8), (Alice, 6), (Lucy, 9), (Tobias, 2). Why are the results different? Why is that goal difficult to achieve? Comment on the suitability of your approach. All the other elements are to have their subscripts reduced by one.

The cost of this operation should be linear in the size of the array.

Preorder, inorder and postorder tree traversals all have something in common: they are depth-first. At each node, the left subtree is entirely traversed before the right subtree. Depth-first traversals are easy to code and can be efficient, but they are ill-suited for some problems.
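
For concreteness, the traversals can be written over a binary-tree datatype with the constructors Lf and Br used later in these notes; the declarations below are a sketch rather than the notes' own code.

    datatype 'a tree = Lf | Br of 'a * 'a tree * 'a tree;

    (* The three depth-first traversals: at each branch node, the left subtree
       is traversed completely before the right one. *)
    fun preorder  Lf            = []
      | preorder  (Br(v, t, u)) = v :: preorder t @ preorder u;

    fun inorder   Lf            = []
      | inorder   (Br(v, t, u)) = inorder t @ (v :: inorder u);

    fun postorder Lf            = []
      | postorder (Br(v, t, u)) = postorder t @ postorder u @ [v];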

Suppose the tree represents the possible moves in a puzzle, and the purpose of the traversal is to search for a node containing a solution.

Then a depth-first traversal may find one solution node deep in the left subtree, when another solution is at the very top of the right subtree.

Often we want the shortest path to a solution. Suppose the tree is infinite. The ML datatype tree contains only finite trees, but ML can represent infinite trees by means discussed in another lecture. Depth-first search is almost useless with infinite trees, for if the left subtree is infinite then it will never reach the right subtree.

A breadth-first traversal explores the nodes horizontally rather than vertically. When visiting a node, it does not traverse the subtrees until it has visited all other nodes at the current depth.

This is easily implemented by keeping a list of trees to visit. Initially, this list consists of one element: the entire tree. Each iteration removes a tree from the head of the list and adds its subtrees at the end of the list. At depth 10, the list could already contain 2^10 = 1024 elements. It requires a lot of space, and aggravates this with a gross misuse of append.
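
A sketch of this traversal, using the tree datatype above; nbreadth is the name used for it later in the text, though the code itself is an assumption.

    (* Naive breadth-first traversal: the pending subtrees are kept in a list,
       and each step appends the two subtrees of the current node to its end. *)
    fun nbreadth []                = []
      | nbreadth (Lf :: ts)        = nbreadth ts
      | nbreadth (Br(v,t,u) :: ts) = v :: nbreadth (ts @ [t, u]);

    (* nbreadth [tr]  lists the labels of tr in breadth-first order. *)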

Evaluating ts @ [t,u] copies the long list ts just to insert two elements. A queue represents a sequence, allowing elements to be taken from the head and added to the tail. Lists can implement queues, but append is a poor means of adding elements to the tail. Our functional arrays (from an earlier lecture) are another possibility; see ML for the Working Programmer. Each operation would take O(log n) time for a queue of length n.

We shall describe a representation of queues that is purely functional, based upon lists, and efficient. Operations take O(1) time when amortized: averaged over the lifetime of a queue. A conventional programming technique is to represent a queue by an array. Two indices point to the front and back of the queue, which may wrap around the end of the array.

The coding is somewhat tricky. Worse, the length of the queue must be given a fixed upper bound. Ideally, access should take constant time, O(1). It may appear that lists cannot provide such access. If enq(q,x) performs q @ [x], then this operation will be O(n). We could represent queues by reversed lists, implementing enq(q,x) by x::q, but then the deq and qhd operations would be O(n). Linear time is intolerable: a series of n queue operations could then require O(n^2) time.

The solution is to represent a queue by a pair of lists: the front part of the queue is stored in order as one list, and the rear part is stored in reverse order as the other. The enq operation adds elements to the rear part using cons, since this list is reversed; thus, enq takes constant time. The deq and qhd operations look at the front part, which normally takes constant time, since this list is stored in order. But sometimes deq removes the last element from the front part; when this happens, it reverses the rear part, which becomes the new front part.

Amortized time refers to the cost per operation averaged over the lifetime of any complete execution. Even for the worst possible execution, the average cost per operation turns out to be constant; see the analysis below. The empty queue, omitted to save space on the slide, has both parts empty. Functions deq and enq call norm to normalize their result. Because queues are in normal form, their head is certain to be in their front part, so qhd (also omitted from the slide) looks there.
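
One way the representation just described might be coded; the constructor name Q and the helper qnull are assumptions, but norm, enq, deq and qhd play the roles given in the text.

    datatype 'a queue = Q of 'a list * 'a list;  (* front part, reversed rear part *)

    exception Empty;

    (* Normal form: the front part may be empty only if the whole queue is empty. *)
    fun norm (Q([], tls)) = Q(rev tls, [])
      | norm q            = q;

    fun qnull (Q([], [])) = true
      | qnull _           = false;

    fun enq (Q(hds, tls), x) = norm (Q(hds, x :: tls));

    fun qhd (Q(x :: _, _)) = x
      | qhd _              = raise Empty;

    fun deq (Q(_ :: hds, tls)) = norm (Q(hds, tls))
      | deq _                  = raise Empty;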

Consider a series of n queue operations that starts and finishes with the empty queue. Each enq operation will perform one cons, adding an element to the rear part. Since the final queue must be empty, each element of the rear part eventually gets transferred to the front part.

The corresponding reversals perform one cons per element. Thus, the total cost of the series of queue operations is at most 2n cons operations, an average of 2 per operation. The amortized time is O(1). There is a catch: the unpredictable delays caused by the occasional reversal make the approach unsuitable for real-time programming, where deadlines must be met.

A case expression matches its argument against a series of patterns. When one matches, it evaluates the corresponding expression. It behaves precisely like the body of a function declaration.

We could have defined function wheels (from an earlier lecture) in this style. A match may also appear after an exception handler.

The function breadth implements the same algorithm as nbreadth but uses a different data structure.
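
A sketch of breadth in terms of the queue operations above; the details are an assumption rather than the notes' exact code.

    (* Breadth-first traversal: the pending subtrees live in a queue, so adding
       them at the end costs O(1) amortized instead of a full copy. *)
    fun breadth q =
          if qnull q then []
          else
            case qhd q of
                Lf          => breadth (deq q)
              | Br(v, t, u) => v :: breadth (enq (enq (deq q, t), u));

    (* breadth (enq (Q([],[]), tr))  lists the labels of tr in breadth-first order. *)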

It represents queues using type queue instead of type list. To compare their efficiency, I applied both functions to the full binary tree of depth 12, which contains 8191 labels. The function nbreadth took 30 seconds, while breadth took well under a second. For larger trees, the speedup would be greater. Choosing the right data structure pays handsomely.

Since all nodes that are examined are also stored, the space and time requirements of breadth-first search are both O(b^d), where b is the branching factor and d is the depth.

Iterative deepening avoids this enormous space requirement. It performs repeated depth-first searches with increasing depth bounds, each time discarding the result of the previous search. Thus it searches to depth 1, then to depth 2, and so on until it finds a solution. We can afford to discard previous results because the number of nodes grows exponentially with the depth.
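
The following sketch conveys the idea for the tree datatype above; the names dfs, deepen and the goal predicate isgoal are assumptions, not the notes' code.

    (* Depth-first search cut off at a given depth bound. *)
    fun dfs isgoal 0 _  = NONE
      | dfs isgoal d Lf = NONE
      | dfs isgoal d (Br(v, t, u)) =
          if isgoal v then SOME v
          else (case dfs isgoal (d - 1) t of
                    SOME w => SOME w
                  | NONE   => dfs isgoal (d - 1) u);

    (* Iterative deepening: search to depth d, then d+1, and so on.
       deepen isgoal tr 1 starts at depth bound 1; it does not terminate
       if no solution exists in an infinite tree. *)
    fun deepen isgoal tr d =
          case dfs isgoal d tr of
              SOME w => SOME w
            | NONE   => deepen isgoal tr (d + 1);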

The wasted work of the repeated searches amounts only to a constant factor; both algorithms have the same time complexity, O(b^d). The reduction in the space requirement is exponential, from O(b^d) for breadth-first search to O(d) for iterative deepening.

Lists can easily implement stacks because both cons and hd affect the head. But unlike lists, stacks are often regarded as an imperative data structure: the effect of push or pop is to change an existing stack, not return a new one.

In conventional programming languages, a stack is often implemented by storing the elements in an array, using a variable (the stack pointer) to count them. Most language processors keep track of recursive function calls using an internal stack.

1. Depth-first: use a stack (efficient but incomplete)
2. Breadth-first: use a queue (uses too much space!)
3. Iterative deepening: use (1) to get the benefits of (2) (trades time for space)
4. Best-first: use a priority queue (heuristic search)

The data structure determines the search! Search procedures can be classified by the data structure used to store pending subtrees. Depth-first search stores them on a stack, which is implicit in functions like inorder, but can be made explicit. Breadth-first search stores such nodes in a queue.

An important variation is to store the nodes in a priority queue, which is an ordered sequence. The priority queue applies some sort of ranking function to the nodes, placing higher-ranked nodes before lower-ranked ones. The ranking function typically estimates the distance from the node to a solution. If the estimate is good, the solution is located swiftly. This method is called best-first search. The priority queue can be kept as a sorted list, although this is slow.

Binary search trees would be much better on average, and fancier data structures improve matters further; for priority queues, see ML for the Working Programmer.

Exercise 9: Outline the advantages and drawbacks of such an implementation compared with the one presented above. A further exercise considers a queue held in an array: two indices into the array indicate the start and end of the queue, which wraps around from the end of the array to the start. How appropriate is such a data structure for implementing breadth-first search?

What search strategy is appropriate in this case?

Functions represent algorithms and infinite data structures. Progress in programming languages can be measured by what abstractions they admit. The idea that functions could be used as values in a computation arose early, but it took some time before the idea was fully realized. Many programming languages let functions be passed as arguments to other functions, but few take the trouble needed to allow functions to be returned as results.

In mathematics, a functional or higher-order function is a function that transforms other functions. Many functionals are familiar from mathematics, such as integral and differential operators of the calculus. To a mathematician, a function is typically an infinite, uncomputable object. We use ML functions to represent algorithms. Sometimes they represent infinite collections of data given by computation rules. Functions cannot be compared for equality.

The best we could do, with reasonable efficiency, would be to test identity of machine addresses. Two separate occurrences of the same function declaration would be regarded as unequal because they would be compiled to different machine addresses.

Such a low-level feature has no place in a language like ML. The fn-notation expresses a function value without giving the function a name. It cannot express recursion. Its main purpose is to package up small expressions that are to be applied repeatedly using some other function.

Each of these refers to some value of the argument a. The parentheses may also be omitted, as in prefix "Doctor " "Who".

Shorthand for curried functions: a function-returning function is just a function of two arguments. The n-argument curried function f is conveniently declared using the syntax fun f x1 ... xn = E. We now have two ways, pairs and currying, of expressing functions of multiple arguments.
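
The curried function prefix used in the example above was presumably declared along these lines; the body a ^ b is an assumption consistent with the application shown.

    fun prefix a b = a ^ b;             (* prefix a returns a function that prepends a *)

    val dr   = prefix "Doctor ";        (* partial application *)
    val who  = dr "Who";                (* "Doctor Who" *)
    val who' = prefix "Doctor " "Who";  (* the parenthesis-free application above *)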

Currying allows partial application, which is useful when fixing the first argument yields a function that is interesting in its own right.

Functions ins and sort are declared locally, referring to lessequal.
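
A sketch of insort as described, with ins and sort local and both referring to lessequal; the partial applications (under assumed names) anticipate the sorting of integers and strings mentioned below.

    fun insort lessequal =
        let fun ins (x, [])    = [x]
              | ins (x, y::ys) = if lessequal (x, y) then x :: y :: ys
                                 else y :: ins (x, ys)
            fun sort []      = []
              | sort (x::xs) = ins (x, sort xs)
        in  sort  end;

    val sortInts    = insort (op<= : int * int -> bool);
    val sortStrings = insort String.<=;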

Though it may not be obvious, insort is a curried function. Given its first argument, a predicate for comparing some particular type of items, it returns the function sort for sorting lists of that type of items. To exploit sorting to its full extent, we need the greatest flexibility in expressing orderings.

There are many types of basic data, such as integers, reals and strings. On the overhead, we sort integers and strings. This is no coding trick; it is justified in mathematics. There are many ways of combining orderings. Most important is the lexicographic ordering, in which two keys are used for comparisons. Often part of the data plays no role in the ordering; consider the text of the entries in an encyclopedia. These ways of combining orderings can be expressed in ML as functions that take orderings as arguments and return other orderings as results.
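
As an illustration of combining orderings as functions, here is one possible lexicographic combinator; the name lex and its exact form are assumptions.

    (* Given "less or equal" tests on two component types, lex builds the
       lexicographic ordering on pairs: compare first components, and use the
       second components only to break ties. *)
    fun lex (le1, le2) ((x1, y1), (x2, y2)) =
          if le1 (x1, x2) andalso le1 (x2, x1)   (* first components equal *)
          then le2 (y1, y2)
          else le1 (x1, x2);

    (* With insort from the sketch above, pairs are sorted on both components: *)
    val sortPairs = insort (lex ((op<= : int * int -> bool), String.<=));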

Numerical programming languages, such as Fortran, allow functions to be passed as arguments in this manner. Classical applications include numerical integration and root-finding. Thanks to currying, ML surpasses Fortran. Not only can f be passed as an argument to sum, but the result of doing so can itself be returned as another function. Given an integer argument, that function returns the result of summing values of f up to the specified bound.
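
A sketch of the summation functional being described; the precise definition in the notes may differ, but the shape is the point: sum is curried, so sum f is itself a function.

    (* sum f m computes f 1 + f 2 + ... + f m (and 0.0 when m is 0). *)
    fun sum f 0 = 0.0
      | sum f m = f m + sum f (m - 1);

    val sumSquares = sum (fn k => real (k * k));   (* partial application *)
    val result     = sumSquares 10;                (* 385.0 *)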

Functionals, currying and fn-notation yield a language for expressions that is grounded in mathematics, concise and powerful. Frege discovered what we now call Currying: that having functions as values meant that functions of several arguments could be formalized using single-argument functions only. Currying is named after Haskell B. Curry, who made deep investigations into the theory of combinators. Later, Landin sketched out the main features of functional languages. Turner made the remarkable discovery that combinators, hitherto thought to be of theoretical value only, could be an effective means of implementing lazy evaluation.

Learning guide: related material is in ML for the Working Programmer. Exercise: Explain how it allows function insort to sort a list of pairs, using both components in the comparisons. Does it matter whether the accumulator is the first, second or third argument? What is sum f?

We again see the advantages of fn-notation, currying and map. Sometimes this coding style is cryptic, but it can be clear as crystal.

Treating functions as values lets us capture common program structures once and for all. Representing a matrix as a list of lists is not especially efficient compared with the conventional representation using arrays.

Lists of lists turn up often, though, and we can see how to deal with them by taking familiar matrix operations as examples. ML for the Working Programmer goes as far as Gaussian elimination, which presents surprisingly few difficulties. The two functions expressed using map would otherwise have to be declared separately.

A simple case of matrix multiplication is when A consists of a single row and B consists of a single column. Coding matrix multiplication in a conventional programming language usually involves three nested loops. It is hard to avoid mistakes in the subscripting, and the code often runs slowly due to redundant internal calculations.
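
A plausible rendering of the functions discussed in the next paragraphs; dotprod is the name used in the text, while transp and matprod are assumptions.

    (* transp gives the columns of a matrix represented as a list of rows.
       (The completely empty matrix is not handled.) *)
    fun transp ([] :: _) = []
      | transp rows      = map hd rows :: transp (map tl rows);

    (* Curried dot product; it raises Match if the vectors differ in length. *)
    fun dotprod []      []      = 0.0
      | dotprod (x::xs) (y::ys) = x * y + dotprod xs ys;

    (* Matrix product: apply dotprod row to every column of B, for every row of A. *)
    fun matprod (arows, brows) =
        let val cols = transp brows
        in  map (fn row => map (dotprod row) cols) arows  end;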

Transposing B yields a list whose elements are the columns of B. Because dotprod is curried, it can be applied to a row of A. The resulting function is applied to all the columns of B. We have another example of currying and partial application.

The outer map applies dotprod to each row of A. The inner map, using fn-notation, applies dotprod row to each column of B. Compare with the version in ML for the Working Programmer, page 89, which does not use map and requires two additional function declarations.

In the dot product function, the two vectors must have the same length. Otherwise, exception Match is raised. While foldl takes the list elements from left to right, foldr takes them from right to left. Many recursive functions on lists can be expressed concisely. Some of them follow common idioms and are easily understood. But you can easily write incomprehensible code, too. The relationship between foldr and the list datatype is particularly close.

Using foldr would be less efficient, requiring linear instead of constant space. Append is expressed similarly, using op:: to stand for the cons function. The sum-of-sums computation is space-efficient: it does not form an intermediate list of sums. Moreover, foldl is iterative.
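
A sketch of the sum-of-sums computation using the Basis Library's foldl (whose argument order may differ from the foldl defined in the notes).

    (* The inner foldl adds up one list; the outer foldl applies it to each
       list in turn, accumulating a grand total starting from zero. *)
    fun sumOfSums lss = foldl (fn (ns, total) => foldl op+ total ns) 0 lss;

    val total = sumOfSums [[1, 2, 3], [4, 5], [6]];   (* 21 *)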

Carefully observe how the inner foldl expresses a function to add up the numbers of a list; the outer foldl applies this function to each list in turn, accumulating a sum starting from zero. The nesting in the sum-of-sums calculation is typical of well-designed fold functionals. Similar functionals can be declared for other data structures, such as trees. Nesting these functions provides a convenient means of operating on nested data structures, such as trees of lists. The length computation might be regarded as frivolous.

A trivial function is supplied using fn-notation; it ignores the list elements except to count them. Using foldl guarantees an iterative solution with an accumulator. The functional exists transforms a predicate into a predicate over lists. Given a list, exists p tests whether some list element satisfies p, making it return true.

If it finds one, it stops searching immediately, thanks to the behaviour of orelse; this aspect of exists cannot be obtained using the fold functionals. Dually, we have a functional to test whether all list elements satisfy the predicate. If it finds a counterexample then it, too, stops searching. The functional filter applies a predicate to all the list elements, but instead of returning the resulting values (which could only be true or false), it returns the list of elements satisfying the predicate.
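
Declared directly, the functionals just described might look as follows (the Basis Library also provides List.exists, List.all and List.filter); disjoint is the function referred to in the next paragraph.

    fun exists p []      = false
      | exists p (x::xs) = p x orelse exists p xs;    (* stops at the first success *)

    fun all p []      = true
      | all p (x::xs) = p x andalso all p xs;         (* stops at a counterexample *)

    fun filter p []      = []
      | filter p (x::xs) = if p x then x :: filter p xs else filter p xs;

    (* Two lists are disjoint if no element of xs equals any element of ys. *)
    fun disjoint (xs, ys) = all (fn x => all (fn y => x <> y) ys) xs;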

But remember: the purpose of list functionals is not to replace the declarations of popular functions, which probably are available already. It is to eliminate the need for separate declarations of ad-hoc functions. When they are nested, like the calls to all in disjoint above, the inner functions are almost certainly one-offs, not worth declaring separately.

The functional maptree applies a function to every label of a tree, returning another tree of the same shape. Analogues of exists and all are trivial to declare. The easiest way of declaring a fold functional is as sketched below.
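
A sketch of these tree functionals over the datatype declared earlier; the name treefold and the example count are assumptions.

    (* maptree applies f to every label, keeping the shape of the tree. *)
    fun maptree f Lf            = Lf
      | maptree f (Br(v, t, u)) = Br (f v, maptree f t, maptree f u);

    (* treefold f e replaces every Br by f and every Lf by e. *)
    fun treefold f e Lf            = e
      | treefold f e (Br(v, t, u)) = f (v, treefold f e t, treefold f e u);

    (* For example, the number of labels in a tree: *)
    fun count tr = treefold (fn (_, l, r) => 1 + l + r) 0 tr;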

The arguments f and e replace the constructors Br and Lf, respectively. To avoid this inconvenience, fold functionals for trees can implicitly treat the tree as a list.

Our primitives themselves can be seen as a programming language. This truth is particularly obvious in the case of functionals, but it holds of programming in general. Part of the task of programming is to extend our programming language with notation for solving the problem at hand.

The levels of notation that we define should correspond to natural levels of abstraction in the problem domain.

The obvious solution requires declaring two recursive functions. Try to get away with only one by exploiting nested pattern-matching. Eliminate it by declaring a curried cons function and applying map.

Many operations could be performed on polynomials, so we shall have to simplify the problem drastically. We shall only consider functions to add and multiply polynomials in one variable. These functions are neither efficient nor accurate, but at least they make a start. Beware: efficient, general algorithms for polynomials are complicated enough to boggle the mind.

Although computers were originally invented for performing numerical arithmetic, scientists and engineers often prefer closed-form solutions to problems. A formula is more compact than a table of numbers, and its properties (the number of crossings through zero, for example) can be determined exactly.

Polynomials are a particularly simple kind of formula. A polynomial is a linear combination of products of certain variables. For example, a polynomial in the variables x, y and z has the form Σ_{i,j,k} a_{ijk} x^i y^j z^k, where only finitely many of the coefficients a_{ijk} are non-zero. Polynomials in one variable, say x, are called univariate. Even restricting ourselves to univariate polynomials does not make our task easy. This example demonstrates how to represent a non-trivial form of data and how to exploit basic algorithmic ideas to gain efficiency.

ML does not provide finite sets as a data structure. We could represent them by lists without repetitions. Finite sets are a simple example of data representation. A collection of abstract objects finite sets is represented using a set of concrete objects repetition-free lists.

Some concrete objects, such as [3, 3], represent no abstract object at all. Operations on the abstract data are defined in terms of the representations. For example, the ML function inter, which forms the intersection of two sets represented in this way, works on the underlying lists. It is easy to check that inter preserves the representation: its result is repetition-free provided its arguments are. Making the lists repetition-free makes the best possible use of space.
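
The function inter is named in the text; the declaration below (with its helper mem) is a sketch of the obvious definition on repetition-free lists.

    (* mem tests membership; inter keeps the members of xs that also occur in ys. *)
    fun mem (x, [])    = false
      | mem (x, y::ys) = x = y orelse mem (x, ys);

    fun inter ([], ys)    = []
      | inter (x::xs, ys) = if mem (x, ys) then x :: inter (xs, ys)
                            else inter (xs, ys);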

Time complexity could be improved. Forming the intersection of an m-element set and an n-element set requires finding all the elements they have in common.

It can only be done by trying all possibilities, taking O(mn) time. Sets of numbers, strings or other items possessing a total ordering should be represented by ordered lists. Some deeper issues can only be mentioned here.

For example, floating-point arithmetic implements real arithmetic only approximately.

Instead of a dense list of all the coefficients, we use a list of (exponent, coefficient) pairs containing only nonzero coefficients: a sparse representation. Coefficients should be rational numbers: pairs of integers with no common factor. Exact rational arithmetic is easily done, but it requires arbitrary-precision integer arithmetic, which is too complicated for our purposes. We shall represent coefficients by the ML type real, which is far from ideal.

The code serves the purpose of illustrating some algorithms for polynomial arithmetic. To promote efficiency, we not only omit zero coefficients but store the pairs in decreasing order of exponents. The ordering allows algorithms resembling mergesort and ensures that at most one term has a given exponent. The degree of a non-zero univariate polynomial is its largest exponent. For example, a polynomial such as x^5 + 1.5x is represented by the list [(5, 1.0), (1, 1.5)] and has degree 5. Our operations may assume their arguments to be valid polynomials and are required to deliver valid polynomials.
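
A sketch of polynomial addition on this representation; the function is called polysum here to avoid clashing with the summation functional declared earlier, and the details are an assumption, not the notes' code.

    type poly = (int * real) list;   (* decreasing exponents, no zero coefficients *)

    (* Merge the two term lists, rather as mergesort merges ordered lists. *)
    fun polysum ([], qs) = qs : poly
      | polysum (ps, []) = ps
      | polysum ((e1, c1) :: ps, (e2, c2) :: qs) =
          if e1 > e2 then (e1, c1) :: polysum (ps, (e2, c2) :: qs)
          else if e2 > e1 then (e2, c2) :: polysum ((e1, c1) :: ps, qs)
          else  (* equal exponents: add the coefficients, dropping a zero sum *)
            if Real.== (c1 + c2, 0.0) then polysum (ps, qs)
            else (e1, c1 + c2) :: polysum (ps, qs);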

An ML signature can specify the polynomial operations; a bundle of declarations meeting the signature can be packaged as an ML structure. These concepts promote modularity, letting us keep the higher abstraction levels tidy. In particular, the structure might have the name Poly and its components could have the short names sum, prod, etc.
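
A sketch of what such a signature might look like; the name ARITH and the exact components are assumptions, but sum and prod are the names mentioned in the text.

    signature ARITH =
      sig
        type t
        val zero : t
        val sum  : t * t -> t
        val prod : t * t -> t
      end;

    (* A structure Poly : ARITH would then package the (exponent, coefficient)
       representation together with these operations. *)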

This course does not discuss ML modules, but a modular treatment of polynomials can be found in my book [13]. Modules are essential for building large systems. Function makepoly could convert a list to a valid polynomial, while destpoly could return the underlying list. For many abstract types, the underlying representation ought to be hidden.
