Tuesday, September 1, 2009

Lesson 10

COE 512 – Computer Systems Architecture

Compiled by: Engr. Guilbert Nicanor A. Atillo

 

Lesson 10 – Operating System -  Important Theory

Operating Systems - Some Theory

Introduction

In Chapter 8 we saw how the operation of computers - in the sense of `driving' them - evolved from users operating them, to operators, and finally through various sophistications of operating system software to timesharing operating systems.

First, we give a demonstration of the effect of multiprogramming.

Figures 9.1 and 9.2 show the execution of three identical jobs, under two different operating system regimes.

Assume that the jobs/processes/programs are in memory to start with, that they do some computation (taking 10 millisec) which results in the requirement to read a particular record from a disk file (taking 10 millisec), and finally that they do some more computation (again 10 millisec). Viewed as executions of single programs this may not appear too useful - where do the results go to? - but, in fact, the jobs are quite typical of what would be encountered by a timesharing system during the execution lifetime of three processes.

Figure 9.1 shows how the jobs would have to run on a single user system like MS-DOS. The computer is I/O blocked during each disk input - regardless of whether the input is by polled I/O, interrupts, or DMA.

Figure 9.2 shows the effect of multiprogramming; when one process becomes I/O blocked, the CPU can do processing for another. Thus, 90 millisec of elapsed time can be reduced to 60; add more jobs and you get a greater increase in efficiency. N.B. in this case, the method of I/O transfer matters - ideally the CPU asks for the data, and receives an interrupt when they are placed in memory.
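The arithmetic behind these figures can be checked with a minimal sketch (not part of the original lesson; the timings are the ones assumed above):

# Three identical jobs: 10 ms compute, 10 ms disk read, 10 ms compute.
JOBS, COMPUTE, IO = 3, 10, 10                     # milliseconds

# (a) Single-user (MS-DOS style): the CPU idles during every disk read.
single_user = JOBS * (COMPUTE + IO + COMPUTE)     # 3 * 30 = 90 ms

# (b) Multiprogrammed: whenever one job waits for its disk read, another
# job's computation is ready, so the CPU does 6 * 10 ms of work back to back.
multiprogrammed = 2 * JOBS * COMPUTE              # 60 ms

print(single_user, multiprogrammed)               # 90 60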

 

  

Figure 9.1: Illustration of multiprogramming: (a) single user

[Timeline diagram omitted: Jobs A, B and C run one after the other; ----- marks time computing on the CPU, ooooo marks time waiting for I/O.]

 

 

  

Figure 9.2: Illustration of multiprogramming: (b) multiprogrammed

[Timeline diagram omitted: the same three jobs interleaved; ----- marks time computing on the CPU, ooooo marks time waiting for I/O, ..... marks time waiting for the CPU.]

 

The work that a modern timesharing operating system performs is a good deal more complex than that of, for example, a simple resident monitor - or even a single user OS like MS-DOS. Nevertheless, like all complex systems, operating systems are best analysed by breaking them into subsystems/components.

The following functions are required:

1. Dispatching or scheduling. Switching the processor from one process to another; allowing jobs to enter the queue.

2. Interrupt handling.

3. Resource allocation. Allocation of the computer's resources to processes: memory, I/O devices, CPU time.

4. Resource protection. Ensuring that no process can access any resource which has not been allocated to it.

5. Provision of (virtual) I/O and other services.

6. Provision of a file system - to processes, and to human users.

7. Analysing and decoding user commands - and requests from programs, and converting these into procedure and program names.

The functions that meet these requirements, and the relationships between them are sometimes shown as a so-called onion-skin diagram; this concept of layers of skin is closely related to that of levels of abstraction given in chapter 8.

We shall now briefly look at a selection of topics that pertain to these requirements.

Processes

General

The concept of a process is central - the term was first used by MULTICS. A process is sometimes referred to as a task. A precise definition can be difficult: `a program in execution' may be the best.

During its life, a process goes through a series of discrete states:

  • Running - if it currently has the CPU resource;
  • Ready - if it could use the CPU if the CPU were available;
  • Blocked - if it is waiting for some event to happen before it can proceed.

When a job is admitted to the system, a corresponding process is created and placed at the back of the queue and it is marked ready; alternatively, there may be separate queues. It gradually moves to the top of the ready list - see scheduling/dispatching below; when the CPU becomes available the process makes a state transition from ready to running. Strictly speaking this is dispatching.

To prevent any one process, say process A, monopolising the system - i.e. to provide time-sharing - it must be possible to stop the process after it has consumed its allocated amount of CPU time. This is done via an interrupting clock (hardware) which allows a specified time quantum - a time-slot - to pass; if process A is still using the CPU the clock generates an interrupt, causing the system to regain control. Process A now makes the state transition back to ready, i.e. A is put back in the ready queue.

The next process, process B, say, on the ready queue is now put into the running state; and so on ....

If a running process initiates an I/O operation before its time-slot expires then the process is forced to release the CPU: it enters the blocked state.

If a process is in the blocked state and the event for which it is waiting completes, then the process makes the transition from blocked to ready. The four state transitions are shown in Figure 9.3

 

  

Figure 9.3: Process State Transitions: a Process can be running, blocked, or ready

[Diagram omitted (process.eps): the state-transition diagram linking the running, ready and blocked states.]
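A minimal sketch (hypothetical, not part of the lesson) of the four legal transitions - dispatch, timeout, block and wakeup - is:

# Legal state transitions and the events that cause them.
VALID = {
    ("ready",   "running"): "dispatch (CPU allocated)",
    ("running", "ready"):   "timeout (quantum expired, clock interrupt)",
    ("running", "blocked"): "block (process started an I/O operation)",
    ("blocked", "ready"):   "wakeup (awaited event completed)",
}

def transition(state, new_state):
    # Return the new state if the move is legal, otherwise raise an error.
    if (state, new_state) not in VALID:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "ready"
for nxt in ("running", "blocked", "ready", "running"):
    s = transition(s, nxt)    # ready -> running -> blocked -> ready -> running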

 

Process Control Block (PCB)

Because of this `switching on and off' of processes it is necessary to have housekeeping information associated with each process. This is held in a process control block (PCB), a record containing important current information about the process, for example: current state of the process, unique identification of the process, the process's priority, open files & I/O devices, use of main memory and backing store, other resources held, and a register save area (see chapter 7).
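As a rough sketch only (the field names are illustrative, not those of any particular OS), a PCB might be modelled as:

from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                          # unique identification of the process
    state: str = "ready"                              # ready / running / blocked
    priority: int = 0                                 # scheduling priority
    open_files: list = field(default_factory=list)    # open files & I/O devices
    memory_map: dict = field(default_factory=dict)    # main memory / backing store usage
    registers: dict = field(default_factory=dict)     # register save area for process switches

pcb = PCB(pid=42, priority=3)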

The Kernel

All of the operations involving processes are controlled by a portion of the OS called its nucleus, core or kernel. The kernel represents only a small portion of the code of the entire OS but it is intensively used and so remains in primary storage while other portions may be transferred in and out of secondary storage as required.

One very important function of the kernel is interrupt processing. In large multi-user systems, the kernel is subjected to a constant barrage of interrupts. Clearly, rapid response is essential: to keep resources well utilised, and to provide acceptable response times for interactive users.

In addition, while dealing with an interrupt, the kernel disables other interrupts. Thus, to prevent interrupts being disabled for long periods of time, the kernel is designed to do the minimum amount of processing possible on each interrupt and to pass the remaining processing to an appropriate system process.

The kernel of a sophisticated OS may contain code to perform:

  • interrupt handling;
  • process creation & destruction;
  • process state switching and dispatching;
  • inter process communication;
  • manipulation of PCB;
  • support of I/O activities;
  • support of memory allocation & deallocation;
  • support of the file system;
  • support of system accounting functions.

Supporting hardware and microcode is often provided - even for portable operating systems like UNIX.

The kernel has privileges that allow it to do things that normal processes cannot do, e.g. access I/O devices. Privilege is usually conferred by the processor state, i.e. privilege level is stored in hardware, so a process has a different processor state from the kernel. In the original Intel 8086/88 and 80286, there was limited support for processor states; however, in the 80386 and after, full time-sharing was supported.

Scheduling

The goal of scheduling (the method by which processor time is managed) is to provide good service to all the processes that are currently competing for resources, including the CPU.

Three types of scheduling may be identified: long term scheduling, medium term scheduling, and short term scheduling.

We will be concerned only with short-term scheduling - also called dispatching, which decides how to share the computer among all the processes that currently want to compute. Such decisions may be made frequently - tens of times per second.

There are several competing goals that scheduling policies aim to fulfill - one is good throughput, which for short term scheduling means keeping the overhead of process switches low. Another goal is good (quick) service.

Scheduling Objectives

Some of the - often conflicting - objectives of a good scheduling algorithm/policy are:

1. Be fair - all processes should be treated equitably;

2. Maximize throughput and service, i.e. as many processes as possible per unit time;

3. Maximize the number of interactive users receiving acceptable response times;

4. Be predictable - a given job should run in about the same amount of time regardless of the load on the system;

5. Balance resource use - processes using underutilized resources should be favoured;

6. Achieve a balance between response and utilization;

7. Avoid indefinite postponement - as a process waits for a resource its priority should grow (aging);

8. Enforce priorities;

9. Give preference to processes holding key resources;

10. Give better service to processes exhibiting desirable behaviour e.g. low paging rates - see below;

11. Degrade gracefully under heavy loads.

Thus, scheduling may appear to be a complex problem.

Scheduling Policies, Algorithms

First Come, First Served (FCFS)

Under FCFS, the short term scheduler runs each process until it either terminates or leaves because of I/O or resource requests. Processes arriving wait in the order that they arrive.

FCFS is non-preemptive, so a process is never blocked once it has started to run until it leaves the domain of the short term scheduler.

FCFS is usually unsatisfactory because of the way that it favours long (greedy) processes - like the person in the bank queue in front of you with the week's takings from a shop!

Round Robin (RR)

The intention is to provide good response ratios for short as well as long processes. This policy services (allocates CPU to) a process for a single time slot (length = q), where q = between 1/60 and 1 second; if the process is not finished it is interrupted at the end of the slot and placed at the rear of the ready queue & will wait for its turn again. New arrivals enter the ready queue at the rear.

RR can be tuned by adjusting q. If q is so high that it exceeds the service requirements for all processes, RR becomes the same as FCFS. As q tends to 0, process switching happens more frequently and eventually occupies all available time. Thus q should be set small enough so that RR is fair but high enough so that the amount of time spent switching processes is reasonable.

RR is preemptive, i.e. it will stop a process if the process has not completed before the end of its time-slot.

Shortest Process Next (SPN)

SPN (like FCFS) is non-preemptive. It tries to improve response for short processes over FCFS. But, it requires explicit information about the service time requirements for each process. The process with the shortest time requirement is chosen.

Unfortunately, long processes wait a long time or suffer starvation. Short processes are treated very favourably.
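A small, hypothetical comparison of FCFS, RR (quantum 4) and SPN on one made-up workload is sketched below; all processes are assumed to arrive at time 0, and the service times (in millisec) are chosen only for illustration. It shows SPN and RR improving the average turnaround of the short processes at the expense of the long one:

bursts = {"A": 24, "B": 3, "C": 3}                      # CPU time each process needs

def avg_turnaround(order):
    # Average completion time when jobs run to completion in the given order.
    t, total = 0, 0
    for name in order:
        t += bursts[name]
        total += t
    return total / len(bursts)

fcfs = avg_turnaround(["A", "B", "C"])                  # 24, 27, 30 -> 27.0
spn  = avg_turnaround(sorted(bursts, key=bursts.get))   # 3, 6, 30  -> 13.0

def avg_turnaround_rr(q=4):
    # Round robin with quantum q: cycle through the ready queue.
    remaining, queue = dict(bursts), list(bursts)
    t, finish = 0, {}
    while queue:
        name = queue.pop(0)
        run = min(q, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = t
        else:
            queue.append(name)
    return sum(finish.values()) / len(finish)

rr = avg_turnaround_rr()                                # B at 7, C at 10, A at 30 -> about 15.7
print(fcfs, spn, rr)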

Conclusion

  • Preemption is worth the extra switching cost. Since context must switch at every interrupt from the running process back to the kernel, it does not usually cost much extra for the kernel to make a decision and allow a different process to run.
  • The time-slot should be large enough so that preemption cost does not become excessive, e.g. if a process switch costs 100 microseconds, the quantum should be about 100 times as long (see the arithmetic sketch after this list).
  • Some policies are more expensive than others to implement. FCFS only requires a queue for the ready list.
  • Many OSs use a hybrid policy, with RR for short processes and some other method for long.
  • Space management affects scheduling. Processes that need a substantial amount of memory are often given larger time-slots so that they can get more work done while they are occupying main store.
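The quantum arithmetic mentioned above is simple enough to check directly (the 100 microsecond switch cost is just the illustrative figure quoted in the list):

switch_cost = 100e-6                 # 100 microseconds per process switch (assumed)
quantum     = 100 * switch_cost      # a time-slot about 100 times as long, i.e. 10 ms
overhead    = switch_cost / (switch_cost + quantum)
print(f"{overhead:.1%}")             # roughly 1% of CPU time lost to switching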

Memory Management

Figure 9.4 shows a five level storage organisation.

 

  

Figure 9.4: Five Levels of Memory/Storage.

[Diagram omitted: the five-level memory/storage hierarchy.]

 

Data that is currently accessed most frequently should be in the highest level of the storage hierarchy. Each level offers about an order of magnitude faster access than its lower neighbour.

The relationship between each layer and the one below it can be considered as similar to that between cache and main memory - cache is explained during the tutorial, or see a textbook. If the item we need is not in the cache and has to be fetched from the slower level below, then access is significantly slower; we call such a situation a cache miss. In contrast, a cache hit is when we find the data we need in the cache and need not turn to the associated slower memory.
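The usual average-access-time calculation shows why a high hit rate matters; the numbers below are invented purely for illustration:

hit_time     = 2       # ns to access the cache (assumed)
miss_penalty = 50      # ns extra to go to the slower level on a miss (assumed)

for hit_rate in (0.80, 0.95, 0.99):
    average = hit_time + (1 - hit_rate) * miss_penalty
    print(hit_rate, average)     # 12.0, 4.5 and 2.5 ns respectively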

Physical and Virtual Store

The OS must arrange main memory so that processes remain ignorant of each other and still share main memory. A guiding principle is that the more frequently data are accessed, the faster the access should be. If swapping in and out of main storage becomes the dominant activity, then we refer to this situation as thrashing.

Physical store is the hardware memory on the machine, usually starting at physical address 0. Certain locations may be reserved for special purposes; the kernel may reside in the low addresses. The rest of physical store may be partitioned into pieces for the processes in the ready list.

Each process has its own virtual memory - the memory perceived by the process. As far as the process is concerned all address references refer to virtual space. Each process' virtual memory is limited only by the machine address size.

Note: Virtual memory can also be very useful even in a single user system - if the computer has a larger address space than physical memory, and if programs need a large amount of memory; compare overlaying - in which the programmer takes responsibility for segmenting the program into memory sized chunks.

Cache Memory

The following, from [Tanenbaum, 1999], pages 65-67, gives a nice explanation of cache memory; cache memory is a small amount of fast memory near the CPU which in many cases mitigates the difficulties caused by the slow(ish) speed of main memory and by the bottleneck of MAR, MBR (see chapter 5). The basic principles employed in paging, see section  9.9, apply to caching.

[Pages 65-67 of Tanenbaum, 1999, to be inserted here.]

Paging

 

Principle

The most common virtual storage technique is paging. Paging is designed to be transparent to the process - like all virtual memory.

The program and data of each process are regarded - by the OS - as being partitioned into a number of pages of equal size. The computer's physical memory is similarly divided into a number of page frames. Page frames are allocated among processes by the OS. At any time, each process will have a few pages in memory, while the remainder is in secondary memory.

The real point behind paging is that, whenever a process needs to access a location that is within a page that is in secondary memory, the page is fetched, invisibly to the process, and the access takes place, almost as if the datum had already been in memory. Thus, the process thinks that it has ALL of its virtual memory in main memory.

The remainder of this discussion can now proceed as if we are only concerned with a single process, which happens to be using a bigger virtual memory than can fit in main, physical, memory; i.e. what is said is applicable to single user systems as well as multi-user. In addition, we shall simplify the arithmetic by assuming a page size of 1000 - simpler for decimal oriented humans.

The process's virtual memory is divided into pages. Page 0 refers to locations 0 to 999, page 1: 1000 to 1999, and so on, ...

The CPU must transform from virtual address to physical address:

1. Calculate:

(a) the virtual page number, i.e. address/1000;

(b) the offset within that page, i.e. the remainder after address/1000.

2. Calculate which page frame - physical page - if any, the virtual page occupies; this involves the use of a lookup table, the page table, which links virtual pages with page frames, e.g. Figure 9.5.
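A quick worked example of this translation, using the page size of 1000 assumed above (the page frame number 7 is invented purely for illustration):

address = 3456
page    = address // 1000     # virtual page number -> 3
offset  = address % 1000      # offset within the page -> 456
# If the page table says virtual page 3 occupies page frame 7, the
# physical address is 7 * 1000 + 456 = 7456.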

 

  

Figure 9.5: Page Table

[Table omitted: a page table listing, for each process (virtual) page, the page frame it occupies (if any) and its secondary store address.]

 

The third column - Secondary Store address - indicates where the OS must read/write if a page is to be transferred in/out of memory.

If the process needs a page which is not in memory - marked by a -1, for example - a page fault is generated, and the OS sets about reading the required page into main memory. Obviously, some other page will have to be removed, to make space for the new page.

The whole paging process is illustrated by the pseudo-code in Figure 9.6. Think of PagedVirtualMemory being used in the situation where you have a PC with 32-bit addressing (4 Gigabytes of virtual memory) but only 2K of physical memory available for data - forget about instructions for this example. Imagine that you want to use a 60K array of bytes. You use PagedVirtualMemory to access the array; the function hides all the hassle of address translation and paging.

 

  

Figure 9.6: Paging

[Pseudo-code omitted: the function PagedVirtualMemory(virtual address) translates the address, pages the required page in on a fault, and returns the byte at the resulting physical address, i.e. M[aa]; a sketch is given below.]
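Since the original listing did not survive conversion, here is a rough Python sketch of the same idea under the assumptions used in the text (page size 1000, 2K of physical memory, a 60K byte array on disk); the disk, page table and first-in-first-out eviction below are simplified stand-ins, not the lesson's exact algorithm:

PAGE_SIZE, NUM_FRAMES = 1000, 2

disk       = {p: bytearray(PAGE_SIZE) for p in range(60)}   # 60 virtual pages held on disk
memory     = bytearray(NUM_FRAMES * PAGE_SIZE)              # tiny (2K) physical memory
page_table = {p: -1 for p in range(60)}                     # -1 means `not in memory'
resident   = []                                              # resident pages, in load order

def paged_virtual_memory(virtual_address):
    # Return the byte at virtual_address, paging its page in first if necessary.
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page_table[page] == -1:                               # page fault
        if len(resident) < NUM_FRAMES:                       # a free frame exists
            frame = len(resident)
        else:                                                # evict the longest-resident page
            victim = resident.pop(0)
            frame = page_table[victim]
            disk[victim][:] = memory[frame*PAGE_SIZE:(frame+1)*PAGE_SIZE]
            page_table[victim] = -1
        memory[frame*PAGE_SIZE:(frame+1)*PAGE_SIZE] = disk[page]   # swap the page in
        page_table[page] = frame
        resident.append(page)
    return memory[page_table[page] * PAGE_SIZE + offset]

b = paged_virtual_memory(34567)     # an access into the 60K `array' with only 2K of memory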

 

Choice of Page Size

The choice of page size, determined at the design stage, is a trade-off between various considerations:

  • Overhead space - least with large pages, since page tables have fewer entries.
  • Internal waste - smallest with small pages.
  • Cold start faults - lower with large pages. These are the faults needed to bring the program into main store the first time. With larger pages there are fewer total pages, hence fewer cold start faults.
  • Page fault rate - generally lower with small pages, assuming that a fixed amount of main store is available.
  • Efficiency of transfer between main memory and disk store - best with large pages, since they take about the same amount of time to transfer as small pages.

Larger pages are usually better, unless they become so large that too few processes in the ready list can have pages in main store at the same time. Common page sizes are from 512 bytes to 4K bytes but can be as large as 8K.
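The space side of this trade-off can be illustrated with a rough calculation (the figures are assumed, and the model simply charges one page-table entry per page plus, on average, half a page of internal waste):

process_size = 1_000_000        # bytes of virtual memory actually used (assumed)
entry_size   = 8                # bytes per page table entry (assumed)

for page_size in (512, 1024, 4096, 8192, 65536):
    table_space    = (process_size // page_size) * entry_size
    internal_waste = page_size // 2
    print(page_size, table_space + internal_waste)   # total overhead is smallest near 4K here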

Page Replacement Policies

The storage manager must decide which pages to keep in main store and which in secondary store - unless the virtual space requirements are less than the available physical space.

These decisions may be made at any time but when a page fault occurs a decision must be made; the process that suffered the page fault cannot continue until the appropriate page is swapped in.

A page replacement policy (PRP)/algorithm is used to decide which page frame to vacate, i.e. which page to swap out.

Note: pages being heavily used should be preserved. One goal is to minimize the number of page faults. Swapping can reach a level at which all processes present need to swap so that no progress can be made; if this persists, the OS is spending all its time transferring pages in and out of main store - a situation known as thrashing.

Some commonly quoted strategies are:

First in, First out (FIFO)

Also called longest resident.

When a page frame is needed, the page that has been in main store for the longest time is chosen. This policy tends to save recently used pages which have a higher probability of being used again soon. However a frequently used page still gets swapped out when it gets to be old enough, even though it will have to be brought in immediately.

Least Recently Used (LRU)

This method is based on the assumption that the page reference pattern in the recent past is a mirror of the pattern in the near future. Pages that have been accessed recently are likely to continue to be accessed and ought to be kept in main store. It is expensive to implement exactly since in order to find the page least recently used, either a list of pages must be maintained in use order or each page must be marked with the time it was last accessed.
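A small, hypothetical comparison of FIFO and LRU on one made-up reference string, with three page frames, is sketched below. (On this particular string FIFO happens to suffer one fault fewer; neither policy wins on every reference pattern.)

def count_faults(refs, frames, policy):
    # Count page faults for FIFO or LRU with the given number of frames.
    resident, order, faults = set(), [], 0
    for page in refs:
        if page in resident:
            if policy == "LRU":          # a hit refreshes the page's recency
                order.remove(page)
                order.append(page)
            continue
        faults += 1                      # page fault
        if len(resident) == frames:
            victim = order.pop(0)        # FIFO: longest resident; LRU: least recently used
            resident.remove(victim)
        resident.add(page)
        order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_faults(refs, 3, "FIFO"), count_faults(refs, 3, "LRU"))    # 9 and 10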

Review Questions:

  1. Explain interrupts and traps, and provide a detailed account of the procedure by which an operating system handles an interrupt.
  2. What is the supervisor or kernel mode? What is the user mode? What are the differences? Why are they needed?
  3. Define the meaning of mechanism and policy in the separation of mechanism and policy principle.
  4. Most modern operating systems provide support for both processes and threads. Define process and thread. List two key differences and similarities between process and thread.
  5. For each of the following pairs of terms, define each term, making sure to clarify the key difference(s) between the two terms.

(a) “application software” and “system software”

(b) “user mode” and “kernel mode”

(c) “single-core” and “multi-core”

(d) “text segment” and “data segment”

(e) “short-term scheduler” and “long-term scheduler”

(f) “waiting time” and “service time”

 

RESEARCH Problem:

 

Compare at least 20 commands for the following: DOS, WINDOWS, LINUX, UBUNTU, and UNIX. A sample output is shown below. Please pass your hardcopy output on September 14, 2009.

 

 

Example (the `copy' command in each environment):

DOS: copy - this command can be used both to copy files from disk to disk or to create a second copy of a file on a single disk.
Ex. C> copy a:brazil1.dat b:\south\brazil2.dat

WINDOWS: copy source [destination]
Ex. copy d:\i386\atapi.sy_ c:\windows\atapi.sys

UNIX: cp firstfile secondfile

LINUX: $ cp stuff stuff.bak

UBUNTU: cp file_name/folder_name

 


Thursday, June 18, 2009

Lesson 1

LESSON 1

BASIC CONCEPTS

 

1.1 INTRODUCTION

Let us begin with the word ‘compute’. It means ‘to calculate’. We are all familiar with calculations in our day to day life. We apply mathematical operations like addition, subtraction, multiplication, etc. and many other formulae for calculations. Simpler calculations take less time, but complex calculations take much longer. Another factor is accuracy in calculations. So man explored the idea of developing a machine which could perform this type of arithmetic calculation faster and with full accuracy. This gave birth to a device or machine called the ‘computer’.

 

The computer we see today is quite different from the one made in the beginning. The number of applications of the computer has increased, and the speed and accuracy of calculation have increased. You must appreciate the impact of computers on our day to day life. Reservation of tickets in airlines and railways, payment of telephone and electricity bills, deposits and withdrawals of money from banks, business data processing, medical diagnosis, weather forecasting, etc. are some of the areas where the computer has become extremely useful.

However, there is one limitation of the computer. Human beings do calculations on their own, but the computer is a dumb machine and has to be given proper instructions to carry out its calculations. This is why we should know how a computer works.

 

1.2 OBJECTIVES

After going through this lesson you will be in a position to

  • define a computer
  • identify characteristics of computer
  • know the origin and evolution of computer
  • identify capability of computer in terms of speed and accuracy
  • distinguish computer from human beings and calculator
  • identify the role of computer
  • appreciate the evolution of computer through five generations

 

1.3 WHAT IS A COMPUTER?

A computer is an electronic device. As mentioned in the introduction it can do arithmetic calculations faster. But as you will see later it does much more than that. It can be compared to a magic box, which serves different purposes for different people. For a common man the computer is simply a calculator, which works automatically and quite fast. For a person who knows much about it, the computer is a machine capable of solving problems and manipulating data. It accepts data, processes the data by doing some mathematical and logical operations and gives us the desired output.

Therefore, we may define the computer as a device that transforms data. Data can be anything like marks obtained by you in various subjects. It can also be the name, age, sex, weight, height, etc. of all the students in your class, or the income, savings, investments, etc. of a country. The computer can also be defined in terms of its functions. It can i) accept data, ii) store data, iii) process data as desired, iv) retrieve the stored data as and when required, and v) print the result in the desired format. You will know more about these functions as you go through the later lessons.

 

1.4 CHARACTERISTICS OF COMPUTER

Let us identify the major characteristics of computer. These can be discussed under the headings of speed, accuracy, diligence, versatility and memory.

 

1.4.1 Speed

As you know, a computer can work very fast. It takes only a few seconds for calculations that take us hours to complete. Suppose you are asked to calculate the average monthly income of one thousand persons in your neighborhood. For this you have to add income from all sources for all persons on a day to day basis and find out the average for each one of them. How long will it take you to do this? One day, two days or one week? Do you know your small computer can finish this work in a few seconds? The weather forecasting that you see every day on TV is the result of compilation and analysis of a huge amount of data on temperature, humidity, pressure, etc. of various places on computers. It takes a few minutes for the computer to process this huge amount of data and give the result.

You will be surprised to know that a computer can perform millions (1,000,000) of instructions and even more per second. Therefore, we measure the speed of a computer in terms of microseconds (10⁻⁶ of a second) or nanoseconds (10⁻⁹ of a second). From this you can imagine how fast your computer performs work.

 

 

 

1.4.2 Accuracy

Suppose someone calculates faster but commits a lot of errors in computing. Such results are useless. There is another aspect. Suppose you want to divide 15 by 7. You may work it out to 2 decimal places and say the quotient is 2.14. I may calculate to 4 decimal places and say that the result is 2.1429. Someone else may go up to 9 decimal places and say the result is 2.142857143. Hence, in addition to speed, the computer should have accuracy or correctness in computing.
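The three answers quoted above are simply 15/7 written to more and more decimal places, which a one-line check confirms:

x = 15 / 7
print(f"{x:.2f}  {x:.4f}  {x:.9f}")     # 2.14  2.1429  2.142857143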

The degree of accuracy of the computer is very high and every calculation is performed with the same accuracy. The accuracy level is determined on the basis of the design of the computer. The errors in a computer are due to human error and inaccurate data.

 

1.4.3 Diligence

A computer is free from tiredness, lack of concentration, fatigue, etc. It can work for hours without creating any error. If millions of calculations are to be performed, a computer will perform every calculation with the same accuracy. Due to this capability it overpowers human being in routine type of work.

 

1.4.4 Versatility

It means the capacity to perform completely different type of work. You may use your computer to prepare payroll slips. Next moment you may use it for inventory management or to prepare electric bills.

 

1.4.5 Power of Remembering

The computer has the power of storing any amount of information or data. Any information can be stored and recalled as long as you require it, for any number of years. It depends entirely upon you how much data you want to store in a computer and when to erase or retrieve these data.

 

1.4.6 No IQ

The computer is a dumb machine and it cannot do any work without instructions from the user. It performs the instructions at tremendous speed and with accuracy. It is for you to decide what you want to do and in what sequence. So a computer cannot take its own decisions as you can.

 

1.4.7 No Feeling

It does not have feelings or emotion, taste, knowledge and experience. Thus it does not get tired even after long hours of work. It does not distinguish between users.

 

1.4.8 Storage

The Computer has an in-built memory where it can store a large amount of data. You can also store data in secondary storage devices such as floppies, which can be kept outside your computer and can be carried to other computers.

 

IN-TEXT QUESTIONS 1.1

 

1. What is a computer? Why is it known as data processor?

 

2. What are the important characteristics of computer?

 

 

1.5 HISTORY OF COMPUTER

 

History of computer could be traced back to the effort of man to count large numbers. This process of counting of large numbers generated various systems of numeration like Babylonian system of numeration, Greek system of numeration, Roman system of numeration and Indian system of numeration. Out of these the Indian system of numeration has been accepted universally. It is the basis of modern decimal system of numeration (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). Later you will know how the computer solves all calculations based on decimal system. But you will be surprised to know that the computer does not understand the decimal system and uses binary system of numeration for processing.

We will briefly discuss some of the path-breaking inventions in the field of computing devices.

1.5.1 Calculating Machines

It took over generations for early man to build mechanical devices for counting large numbers. The first calculating device called ABACUS was developed by the Egyptian and Chinese people.

The word ABACUS means calculating board. It consisted of sticks in horizontal positions on which were inserted sets of pebbles. A modern form of ABACUS is given in Fig. 1.2. It has a number of horizontal bars each having ten beads. Horizontal bars represent units, tens, hundreds, etc.

 

1.5.2 Napier’s bones

The Scottish mathematician John Napier built a mechanical device for the purpose of multiplication in 1617 AD. The device was known as Napier’s bones.

 

1.5.3 Slide Rule

The English mathematician Edmund Gunter developed the slide rule. This device could perform operations like addition, subtraction, multiplication, and division. It was widely used in Europe in the 17th century.

 

1.5.4 Pascal's Adding and Subtraction Machine

You might have heard the name of Blaise Pascal. He developed a machine at the age of 19 that could add and subtract. The machine consisted of wheels, gears and cylinders.

 

1.5.5 Leibniz’s Multiplication and Dividing Machine

The German philosopher and mathematician Gottfried Leibniz built around 1673 a mechanical device that could both multiply and divide.

1.5.6 Babbage’s Analytical Engine

It was in the year 1823 that the famous Englishman Charles Babbage built a mechanical machine to do complex mathematical calculations. It was called the difference engine. Later he developed a general-purpose calculating machine called the analytical engine. You should know that Charles Babbage is called the father of the computer.

 

1.5.7 Mechanical and Electrical Calculator

In the beginning of 19th century the mechanical calculator was developed to perform all sorts of mathematical calculations. Up to the 1960s it was widely used. Later the rotating part of mechanical calculator was replaced by electric motor. So it was called the electrical calculator.

 

1.5.8 Modern Electronic Calculator

The electronic calculators used in the 1960s ran on electron tubes, which made them quite bulky. Later these were replaced by transistors and as a result the size of calculators became much smaller.

The modern electronic calculator can compute all kinds of mathematical computations and mathematical functions. It can also be used to store some data permanently. Some calculators have in-built programs to perform some complicated calculations.

 

IN-TEXT QUESTIONS 1.2

 

1. What is the first mathematical device built and when was it built?

 

2. Who is called the father of Computer Technology?

 

1.6 COMPUTER GENERATIONS

You know that the evolution of computer started from 16th century and resulted in the form that we see today. The present day computer, however, has also undergone rapid change during the last fifty years. This period, during which the evolution of computer took place, can be divided into five distinct phases known as Generations of Computers. Each phase is distinguished from others on the basis of the type of switching circuits used.

 

1.6.1 First Generation Computers

First generation computers used thermionic valves. These computers were large in size and writing programs on them was difficult. Some of the computers of this generation were:

 

ENIAC: It was the first electronic computer, built in 1946 at the University of Pennsylvania, USA by John Eckert and John Mauchly. It was named the Electronic Numerical Integrator and Calculator (ENIAC). The ENIAC was 30 by 50 feet in size, weighed 30 tons, contained 18,000 vacuum tubes, 70,000 resistors and 10,000 capacitors, and required 150,000 watts of electricity. Today your favorite computer is many times as powerful as ENIAC, yet its size is very small.

 

EDVAC: It stands for Electronic Discrete Variable Automatic Computer and was developed in 1950. The concept of storing data and instructions inside the computer was introduced here. This allowed much faster operation since the computer had rapid access to both data and instructions. The other advantage of storing instructions was that the computer could make logical decisions internally.

 

Other Important Computers of First Generation

EDSAC: It stands for Electronic Delay Storage Automatic Computer and was developed by M.V. Wilkes at Cambridge University in 1949.

UNIVAC-1: Eckert and Mauchly produced it in 1951; UNIVAC stands for UNIVersal Automatic Computer.

 

Limitations of First Generation Computer

The following are the major drawbacks of first generation computers.

1. The operating speed was quite slow.

2. Power consumption was very high.

3. It required large space for installation.

4. The programming capability was quite low.

 

1.6.2 Second Generation Computers

Around 1955 a device called Transistor replaced the bulky electric tubes in the first generation computer. Transistors are smaller than electric tubes and have higher operating speed. They have no filament and require no heating. Manufacturing cost was also very low. Thus the size of the computer got reduced considerably.

It was in the second generation that the concepts of the Central Processing Unit (CPU), memory, programming languages and input and output units were developed. Programming languages such as COBOL and FORTRAN were developed during this period. Some of the computers of the Second Generation were:

 

1. IBM 1620: Its size was smaller as compared to First Generation computers and mostly used for scientific purpose.

 

2. IBM 1401: Its size was small to medium and used for business applications.

 

3. CDC 3600: Its size was large and it was used for scientific purposes.

 

1.6.3 Third Generation Computers

The third generation computers were introduced in 1964. They used Integrated Circuits (ICs). These ICs are popularly known as Chips. A single IC has many transistors, resistors and capacitors built on a single thin slice of silicon. So it is quite obvious that the size of the computer got further reduced. Some of the computers developed during this period were the IBM-360, ICL-1900, IBM-370, and VAX-750. Higher level languages such as BASIC (Beginners All purpose Symbolic Instruction Code) were developed during this period.

Computers of this generation were smaller in size and lower in cost, had larger memory, and their processing speed was very high.

 

1.6.4 Fourth Generation Computers

The present day computers that you see today are the fourth generation computers that started around 1975. They use Large Scale Integrated Circuits (LSIC) built on a single silicon chip, called microprocessors. Due to the development of the microprocessor it is possible to place the computer’s central processing unit (CPU) on a single chip. These computers are called microcomputers. Later Very Large Scale Integrated Circuits (VLSIC) replaced LSICs.

Thus the computer which was occupying a very large room in earlier days can now be placed on a table. The personal computer (PC) that you see in your school is a Fourth Generation Computer.

 

1.6.5 Fifth Generation Computer

The computers of 1990s are said to be Fifth Generation computers. The speed is extremely high in fifth generation computer. Apart from this it can perform parallel processing. The concept of Artificial intelligence has been introduced to allow the computer to take its own decision. It is still in a developmental stage.

 

 

1.7 TYPES OF COMPUTERS

Now let us discuss the varieties of computers that we see today. Although they belong to the fifth generation they can be divided into different categories depending upon size, efficiency, memory and number of users. Broadly they can be divided into the following categories.

 

1. Microcomputer: The microcomputer is at the lowest end of the computer range in terms of speed and storage capacity. Its CPU is a microprocessor. The first microcomputers were built of 8-bit microprocessor chips. The most common application of personal computers (PC) is in this category. The PC supports a number of input and output devices. An improvement on the 8-bit chip is the 16-bit and 32-bit chips. Examples of microcomputers are the IBM PC and PC-AT.

 

2. Mini Computer: This is designed to support more than one user at a time. It possesses large storage capacity and operates at a higher speed. The mini computer is used in multi-user system in which various users can work at the same time. This type of computer is generally used for processing large volume of data in an organisation. They are also used as servers in Local Area Networks (LAN).

 

3. Mainframes: These types of computers generally use 32-bit microprocessors. They operate at very high speed, have very large storage capacity and can handle the work load of many users. They are generally used in centralised databases. They are also used as controlling nodes in Wide Area Networks (WAN). Examples of mainframes are the DEC, ICL and IBM 3000 series.

 

4. Supercomputer: They are the fastest and most expensive machines. They have high processing speed compared to other computers. They also employ multiprocessing techniques. One of the ways in which supercomputers are built is by interconnecting hundreds of microprocessors. Supercomputers are mainly used for weather forecasting, biomedical research, remote sensing, aircraft design and other areas of science and technology. Examples of supercomputers are CRAY YMP, CRAY2, NEC SX-3, CRAY XMP and PARAM from India.

 

IN-TEXT QUESTIONS 3

 

1. Into how many generations is the evolution of the computer divided?

 

2. What is VLSIC?

 

3. The personal computer that you see today is in which generation of computer?

 

1.8 WHAT YOU HAVE LEARNT

 

In this lesson we have discussed the major characteristics of the computer. Speed, accuracy, memory and versatility are some of the features associated with a computer. But the computer that we see today has not developed overnight. It has taken centuries of human effort to see the computer in its present form. There are five generations of computers. Over these generations the physical size of the computer has decreased, but on the other hand its processing speed has improved tremendously. We also discussed the varieties of computers available today.

 

1.9 TERMINAL QUESTIONS

 

1. Why is computer known as data processor?

 

2. Explain in brief the various generations in computer technology?

 

3. Write a short note on Fifth Generation of computer. What makes it different from Fourth generation computer?

 

4. Why did the size of computer get reduced in third generation computer?

 

5. Give short notes on the following

(a) Versatility (b) Storage (c) Slide Rule (d) Babbage’s Analytical Engine

 

6. Distinguish between Microcomputer and Mainframe computer.

 

1.10 FEEDBACK TO IN-TEXT QUESTIONS

IN-TEXT QUESTIONS 1

 

1. A computer is an electronic device, which is used to accept, store, retrieve and process data. It is called a data processor because it is mainly used for processing data to produce meaningful information.

 

2. The characteristics of computer are speed, accuracy, diligence, versatility and storage.

 

IN-TEXT QUESTIONS 2

 

1. Analytical engine, 1823.

 

2. Charles Babbage

 

IN-TEXT QUESTIONS 3

 

1. Five generations

 

2. Very Large Scale Integrated Circuits

 

3. Fourth Generation

 

1.11 TERMINAL QUESTIONS

1. Explain various types of computers.

2. Explain in brief the various generations in computer technology.

3. Write a short note on Fifth Generation of computer. What makes it different from Fourth Generation computer?

4. Why did the size of computer get reduced in Third Generation computer?

LESSON 2

COMPUTER ORGANISATION

2.1 INTRODUCTION

In the previous lesson we discussed the evolution of the computer. In this lesson we will provide you with an overview of the basic design of a computer. You will know how different parts of a computer are organised and how various operations are performed between different parts to do a specific task. As you know from the previous lesson, the internal architecture of a computer may differ from system to system, but the basic organisation remains the same for all computer systems.

2.2 OBJECTIVES

At the end of the lesson you will be able to:

  • understand the basic organisation of a computer system
  • understand the meaning of Arithmetic Logical Unit, Control Unit and Central Processing Unit
  • differentiate between bit, byte and word
  • define computer memory
  • differentiate between primary memory and secondary memory
  • differentiate between primary storage and secondary storage units
  • differentiate between input devices and output devices

 

2.3 BASIC COMPUTER OPERATIONS

A computer as shown in Fig. 2.1 performs basically five major operations or functions irrespective of their size and make. These are 1) it accepts data or instructions by way of input, 2) it stores data, 3) it can process data as required by the user, 4) it gives results in the form of output, and 5) it controls all operations inside a computer. We discuss below each of these operations.

1. Input: This is the process of entering data and programs in to the computer system. You should know that computer is an electronic machine like any other machine which takes as inputs raw data and performs some processing giving out processed data. Therefore, the input unit takes data from us to the computer in an organized manner for processing.

Fig. 2.1 Basic computer Operations

2. Storage: The process of saving data and instructions permanently is known as storage. Data has to be fed into the system before the actual processing starts. It is because the processing speed of Central Processing Unit (CPU) is so fast that the data has to be provided to CPU with the same speed. Therefore the data is first stored in the storage unit for faster access and processing. This storage unit or the primary storage of the computer system is designed to do the above functionality. It provides space for storing data and instructions.

The storage unit performs the following major functions:

  • All data and instructions are stored here before and after processing.
  • Intermediate results of processing are also stored here.

 

3. Processing: The task of performing operations like arithmetic and logical operations is called processing. The Central Processing Unit (CPU) takes data and instructions from the storage unit and makes all sorts of calculations based on the instructions given and the type of data provided. It is then sent back to the storage unit.

4. Output: This is the process of producing results from the data for getting useful information. Similarly the output produced by the computer after processing must also be kept somewhere inside the computer before being given to you in human readable form. Again the output is also stored inside the computer for further processing.

5. Control: This is the manner in which instructions are executed and the above operations performed. Control of all operations like input, processing and output is performed by the control unit. It takes care of the step by step processing of all operations inside the computer.

2.4 FUNCTIONAL UNITS

In order to carry out the operations mentioned in the previous section the computer allocates the task between its various functional units. The computer system is divided into three separate units for its operation. They are 1) arithmetic logical unit, 2) control unit, and 3) central processing unit.

2.4.1 Arithmetic Logical Unit (ALU)

After you enter data through the input device it is stored in the primary storage unit. The actual processing of the data and instructions is performed by the Arithmetic Logical Unit. The major operations performed by the ALU are addition, subtraction, multiplication, division, logic and comparison. Data are transferred to the ALU from the storage unit when required. After processing, the output is returned to the storage unit for further processing or storage.

2.4.2 Control Unit (CU)

The next component of the computer is the Control Unit, which acts like a supervisor seeing that things are done in the proper fashion. The control unit determines the sequence in which computer programs and instructions are executed. It manages things like the processing of programs stored in the main memory, interpretation of the instructions and issuing of signals for the other units of the computer to execute them. It also acts as a switchboard operator when several users access the computer simultaneously, and thereby coordinates the activities of the computer’s peripheral equipment as they perform input and output. It is therefore the manager of all the operations mentioned in the previous section.

2.4.3 Central Processing Unit (CPU)

The ALU and the CU of a computer system are jointly known as the central processing unit. You may call the CPU the brain of any computer system. It is just like the brain: it takes all major decisions, makes all sorts of calculations and directs the different parts of the computer by activating and controlling their operations.

 

HARDWARE

SOFTWARE