(1)

Scuola Superiore Sant’Anna

Operating Systems - Introduction

Giuseppe Lipari

(2)

Fundamentals

• Algorithm:

– It is the logical procedure to solve a certain problem

– Informally specified as a sequence of elementary steps that an

“execution machine” must follow to solve the problem

– not necessarily expressed in a formal programming language!

• Program:

– It is the implementation of an algorithm in a programming language

– Can be executed several times with different inputs

• Process:

– It is an instance of a program in execution

(3)

Operating System

• An operating system is a program that

– Provides an “abstraction” of the physical machine through a simple interface

– Each part of the interface is a “service”

• An OS is also a resource manager

– With the term “resource” we denote all physical entities of a computing machine

– The OS provides access to the physical resources
– The OS provides abstract resources (for example, a file, a virtual page in memory, etc.)

(4)

Levels of abstraction

[Diagram: layered view of the system. At the user level, users (Kim, Lisa, Bill) run applications such as a web browser, shell, videogame and printer daemon; at the programmer level these applications use the OS interface (system API); at the system level the operating system provides the virtual memory, the scheduler and the virtual file system; device drivers connect the OS to the HW level.]

(5)

Abstraction mechanisms

• Why abstraction?

– Programming the HW directly has several drawbacks

• It is difficult and error-prone

• It is not portable

– Suppose you want to write a program that reads a text file from disk and outputs it on the screen

• Without a proper interface it is virtually impossible!

(6)

Abstraction Mechanisms

• Application programming interface (API)

– Provides a convenient and uniform way to access a service, so that

• HW details are hidden from the high-level programmer

• Applications do not depend on the specific HW

• The programmer can concentrate on higher level tasks

– Example

• To read a file, Linux and many other Unix OSs provide the open() and read() system calls, which, given a “file name”, allow the program to load the data from external storage
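A minimal sketch of how this interface is used (standard POSIX calls; the file name is just a placeholder):

/* print the contents of a file on the screen through the OS interface */
#include <fcntl.h>      /* open  */
#include <unistd.h>     /* read, write, close */

int main(void)
{
    char buf[4096];
    ssize_t n;
    int fd = open("example.txt", O_RDONLY);      /* ask the OS to open the file   */
    if (fd < 0)
        return 1;                                /* the OS reports the failure    */
    while ((n = read(fd, buf, sizeof buf)) > 0)  /* disk, driver and file-system  */
        write(STDOUT_FILENO, buf, (size_t)n);    /* details are hidden by the API */
    close(fd);
    return 0;
}

The same read()/write() calls work for the screen, the keyboard or a pipe: this uniformity is exactly what the API provides.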

(7)

Historical Perspective

• In the beginning was the batch processor

– Huge machines, not very powerful

– Used mainly for scientific computation and military applications

– Programs were executed one at a time

• They were called jobs

– Programs were simple sequential computations

• Read the input

• Compute

• Produce output

– Non-interactive!

(8)

Batch processor

• Batch = non-interactive

• The program could not be interrupted or suspended (non-preemptive)

• Scheduling:

[Diagram: jobs, submitted as programs on punch cards, are fed to the CPU one after another, and the results are produced]

(9)

Drawbacks

• CPU was inactive for long intervals of time

– While reading the punch cards, the CPU had to wait
– The punch card reader was very slow

• Solution: spooling

– Use a magnetic disk (a faster I/O device)
– Jobs were grouped into “job pools”

– While executing one job of a pool, read the next one into disk

– When a job finishes, load the next one from the disk

– Spool = Simultaneous Peripheral Operation On-Line

(10)

Interactivity

• The need for interaction

– For reading input from the keyboard during the computation

– For showing intermediate results

– For saving intermediate results on magnetic storage

• Input/output

– It can be done with a technique called polling

• Wait until the device is ready and get/put the data

• Handshaking

(11)

Multi-programming

• The natural evolution was “concurrency”

– IDEA: while a job is reading/writing from/to an I/O device, schedule another job to execute (preemption)

[Diagram: jobs queue for the CPU; when a job starts an I/O operation it is preempted, another job runs, and results are produced]

(12)

Multi-programming

• Multi-programming is very common in real-life

– Consider a lawyer that has many clients

• FIFO policy: serving one client at a time, from the beginning until the court sentence

• In Italy, a sentence can be given after more than 10 years.

Imagine a poor lawyer trying to survive with one client for ten years!

• In reality, the lawyer adopts a TIME SHARING policy!

– All of us adopt a time-sharing policy when doing many jobs at the same time!

(13)

The role of the Operating System

• Structure of a multi-programmed system

– Who decides when a job is suspended?

– Who decides which job is to be executed next?

• In the first computers, these tasks were carried out by the application itself

• Each job could suspend itself and pass the “turn” to the next job (co-routines)

• However, this is not very general or portable!

– Today, the OS provides the multiprogramming services

• The scheduler module chooses which job executes next depending on the status of the system

(14)

Time sharing systems

• In time sharing systems

– The time line is divided into “slots”, or “rounds”, each one of maximum length equal to a fixed time

quantum

– If the executing job blocks on an I/O operation, or if the quantum expires, it is suspended and executed later

[Diagram: process switch: jobs alternate on the CPU, each running for at most one quantum]

(15)

Time sharing systems

• In time sharing systems

– Each process executes approximately as if it were alone on a slower processor

– The OS (thanks to the scheduler) “virtualizes” the processor

• One single processor is seen as many (slower) parallel processors (one for each process)

– We will see that an OS can virtualize many HW resources

• Memory, disk, network, etc

• Time sharing systems are not predictable

– The amount of execution time received by one process depends on the number of processes in the system

– If we want predictable behavior, we must use a RTOS

(16)

Multi-user systems

• The first computers were very powerful and very expensive

– A university could afford only one mainframe, but many people needed to access the same computer
– Therefore, the mainframe would give simultaneous access to many users at the same time

– This is an obvious extension of the multi-process system

• One or more processes for each user

(17)

Multi-user systems

• The terminals had no computing power

– A keyboard + a monitor + a serial line

– Every computation was carried out in the mainframe

– It is like having one computer with many keyboards and videos

[Diagram: a mainframe connected to several dumb terminals]

(18)

Multi-user system

• Another dimension was necessary

– The concept of user and account was born
– The first privacy concerns were raised

• Access rules

• Passwords

• Cryptography was applied for the first time in a non-military environment!

– This makes the system more complex!

(19)

Distributed systems

• Finally, distribution was introduced

– Thanks to DARPA, the TCP/IP protocol was developed and the Internet was born

• The major universities in the USA connected their mainframes

• Mail, telnet, ftp, etc

• The natural evolution was the Internet and the World Wide Web

– All of this was possible thanks to

• The freedom of circulation of ideas

• The “liberal” environment in universities

• The need for communication and sharing information

(20)

Distributed systems

• More flexibility

– Client/server architectures

• One server provides “services” to remote clients

• Example: web, ftp, databases, etc

– It is possible to “distribute” an application

• Different “parts” execute on different computers and communicate with each other to exchange information and synchronise

• Massively parallel programs can be easily implemented

– Migration

• Processes can “move” from one computer to another to carry out their work

(21)

Classification of Operating Systems

• The OS provides an abstraction of a physical machine

– To allow portability

– To make programmer’s life easier

• The level of abstraction depends on the application context

– the kind of services an OS provides depends on the kind of services the application requires

• General-purpose OSs should provide a wide range of services to satisfy as many users as possible

• Specialised OS provide only a limited set of specialised services

– OS can be classified depending on the application context

• General purpose (Windows, Linux, etc.), servers, micro-kernels, embedded OS, real-time OS

(22)

Services

• Virtual processor

– An OS provides “concurrency” between processes

• Many processes are executed at the same time in the same system

• Each process executes for a fraction of the processor bandwidth (as if it were on a dedicated slower processor)

– Provided by the scheduling sub-system

– Provided by almost all OSs, from nano-kernels to general-purpose systems

(23)

Services

• Virtual memory

– Physical memory is limited;

– In old systems, the number of concurrent

processes was limited by the amount of physical memory

– IDEA: extend the physical memory by using a

“fast” mass storage system (disk)

• Some of the processes stay in memory, some are temporarily saved on the disk

• When a process must be executed, if on the disk, it is first loaded in memory and then executed

• This technique is called “swapping”

(24)

Virtual memory and physical memory

[Diagram: processes A–E all have their own virtual memory; at any given moment only some of them (e.g. A, C, E) reside in physical memory, while the others (B, D) are temporarily swapped out to disk; the CPU executes the processes currently in physical memory]

(25)

Virtual Memory

• Advantages

– Virtually infinite memory

– The program is not limited by the size of the physical memory

• Disadvantages

– If we have too many programs, we spend most of the time swapping back and forth

– Performance degradation!

– Not suitable for real-time systems

• It is not possible to guarantee a short response time, because it depends on whether the program currently resides in memory or on disk

(26)

Virtual File System

• Basic concepts

– File: sequence of data bytes

• It can be on a mass storage (hard disk, cd-rom, etc.)

• It can be on special virtual devices (e.g. RAM disks)

• It can be on a remote system!

– Directory: list of files

• Usually organised in a tree

• Represents how files are organised on the mass storage system

• Virtualisation

– In most OSs, external serial devices (like the console or the serial line) are also accessed through the file interface

(27)

Virtual file system

• A good virtual file system provides additional features:

– Buffering & caching

• For optimising I/O from block devices

– Transactions

• For example, ReiserFS

– Fault tolerance capabilities

• For example, the RAID system

• Virtual file system is not provided by all OS categories

– Micro and nano kernels do not even provide a file system!

(28)

Privacy and access rules

• When many users are supported

– We must prevent non-authorised users from accessing restricted information

– Usually, there are two or more “classes” of users

• Supervisors

• Normal users

– Each resource in the system can be customised with proper “access rules” that prevent access by non-authorised users

(29)

Scuola Superiore Sant’Anna

Overview of hardware architectures

(30)

Basic blocks

[Diagram: the CPU, the main memory and the other I/O devices (disk, keyboard, video) are connected through the bus]

(31)

The processor

• Set of registers

– IP: instruction pointer
– SP: stack pointer
– R0–R3: general registers
– CR: control register

• Execution unit

– Arithmetic unit
– Fetching unit
– Branch prediction unit

• Other components

– Pipeline
– Cache

[Diagram: the CPU contains the registers IP, SP, CR and R0–R3, together with the execution units]

(32)

Processor registers

• User visible registers

– Used as temporary buffers for processor operations
– There can be any number of them

• RISC architectures: array of registers

• CISC architectures: set of registers dedicated to specific operations

• Control and Status registers

– IP Instruction Pointer
– SP Stack Pointer

– CR Control Register (or PSW Program Status Word)

(33)

Modes of operation

• Many processors have at least two modes of operation

– Supervisor mode

• All instructions are allowed

• Kernel routines execute in supervisor mode

• The OS must access all features of the system

– User mode

• Not all instructions are allowed

• User programs execute in user mode

• Some instructions (for example, disabling interrupts) cannot be invoked directly by user programs

• Switching

– It is possible to switch from user mode to supervisor mode with special instructions
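As an illustration, a minimal sketch of this switch on Linux: the syscall(2) wrapper traps into the kernel, which runs the requested service in supervisor mode and then returns to user mode (the system call number comes from <sys/syscall.h>):

/* user mode cannot touch the hardware directly, so it asks the kernel */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "hello from user mode\n";
    /* trap to supervisor mode, run the kernel's write service, return to user mode */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}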

(34)

Main Memory and bus

• The RAM

– Sequence of data locations

– Contains both instructions (TEXT) and data variables

• The bus

– A set of “wires”

• Address wires

• Data wires

– The number of data wires is the number of bits that can be read with one memory access

(35)

Instruction execution

• We distinguish at least two phases

– Fetching: the instruction is read from the memory
– Execute: the instruction is executed

• Data processing instr. – the result is stored in registers

• Load instr. – the data is loaded from main memory

• Store – the data is stored in main memory

• Control – the flow of execution may change (change IP)

– Some instructions may be a combination of different types

[Diagram: fetch–execute cycle: start, fetch the next instruction, execute the instruction, repeat until halt]
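A toy sketch of this cycle for an invented machine (the opcodes and the memory contents are made up for illustration):

/* an imaginary machine whose memory holds one opcode per location */
enum { OP_HALT, OP_NOP, OP_INC_R0 };

int main(void)
{
    int mem[] = { OP_NOP, OP_INC_R0, OP_INC_R0, OP_HALT };  /* TEXT */
    int ip = 0;                         /* instruction pointer */
    int r0 = 0;                         /* a general register  */

    for (;;) {
        int instr = mem[ip++];          /* fetch: read the instruction and advance IP */
        switch (instr) {                /* execute */
        case OP_NOP:            break;
        case OP_INC_R0: r0++;   break;
        case OP_HALT:   return r0;      /* halt */
        }
    }
}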

(36)

Stack Frames

• The stack is used to

– Save local variables

– Implement function calling

• Every time a function is called

– The parameters are saved on the stack

– Call <address>: The current IP is saved on the stack

– The routine saves the registers that will be modified on the stack

– local variables are defined on the stack

[Diagram: a stack frame contains the parameters, the saved IP, the saved registers (R0, R1, R2) and the local variables (x, y); BP points to the base of the frame]
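A small C illustration of the sequence above; the comments describe what a typical compiler places on the stack (the exact layout depends on the calling convention):

int sum(int a, int b)      /* when sum() is called:                            */
{                          /*  - the parameters a, b are passed on the stack   */
    int tmp;               /*    (or in registers, depending on the ABI)       */
    tmp = a + b;           /*  - CALL pushes the return address (saved IP)     */
    return tmp;            /*  - tmp is a local variable in sum()'s frame      */
}                          /*  - RET pops the saved IP and resumes the caller  */

int main(void)
{
    return sum(2, 3);      /* main()'s frame stays below sum()'s frame */
}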

(37)

External devices

• I/O devices

– Set of data registers
– Set of control registers
– Mapped on certain memory locations

[Diagram: an I/O device interface with data registers D0–D2 and control registers CR0–CR2, mapped to the memory addresses FF00–FF0A and connected, together with the CPU and the memory, through the bus]

(38)

I/O operations

• Structure of an I/O operation

– Phase 1: prepare the device for the operation

• In case of output, data is transferred to the data buffer registers

• The operation parameters are set with the control registers

• The operation is triggered

– Phase 2: wait for the operation to be performed

• Devices are much slower than the processor

• It may take a while to get/put the data on the device

– Phase 3: complete the operation

(39)

Example of input operation

• Phase 1: nothing

• Phase 2: wait until bit 0 of CR0 becomes 1

• Phase 3: read data from D0 and reset bit 0 of CR0

[Diagram: the CPU repeatedly reads CR0 of the I/O device interface over the bus; when bit 0 becomes 1, it copies D0 into R0 and clears CR0]
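A sketch of this polled input in C, assuming (for illustration only) that D0 and CR0 are memory-mapped at the addresses 0xFF00 and 0xFF02 of the figure:

#include <stdint.h>

/* volatile: the registers change outside the program, so the compiler
   must not optimise the polling loop away */
#define D0  (*(volatile uint16_t *)0xFF00)   /* data register (assumed address)    */
#define CR0 (*(volatile uint16_t *)0xFF02)   /* control register (assumed address) */

uint16_t poll_read(void)
{
    while ((CR0 & 0x1) == 0)   /* phase 2: wait until bit 0 of CR0 becomes 1 */
        ;                      /* busy waiting ("polling")                   */
    uint16_t data = D0;        /* phase 3: read the data from D0 ...         */
    CR0 &= (uint16_t)~0x1;     /* ... and reset bit 0 of CR0                 */
    return data;
}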

(40)

Example of output operation

• Phase 1: write data to D1 and set bit 0 of CR1

• Phase 2: wait for bit 1 of CR1 to become 1

• Phase 3: clean CR1

[Diagram: the CPU copies R0 into D1 and sets bit 0 of CR1, then repeatedly reads CR1 over the bus until bit 1 becomes 1, and finally cleans CR1]

(41)

Temporal diagram

• Polling

– This technique is called “polling” because the

processor “polls” the device until the operation is completed

– In general, it can be a waste of time

– The processor could execute something useful while the device is working

– How can the processor know when the device has completed the I/O operation?

(42)

Interrupts

• Every processor supports an interrupt mechanism

– The processor has a special pin, called “interrupt request (IRQ)”

– Upon reception of a signal on the IRQ pin,

• If interrupts are enabled, the processor suspends execution and invokes an “interrupt handler” routine

• If interrupts are disabled, the request is pending and will be served as soon as the interrupts are enabled

[Diagram: after each instruction the processor checks whether an interrupt is pending and, if so, serves the interrupt]

(43)

Interrupt handling

• Every interrupt is associated with one “handler”

• When the interrupt arrives

– The processor suspends what it is doing
– Pushes CR on the stack

– Calls the handler (pushes the IP on the stack)

– The handler saves the registers that will be modified on the stack

– Executes the interrupt handling code
– Restores the registers

– Executes IRET (restores IP and CR)

[Diagram: stack contents while the handler runs: saved CR, saved IP, saved R0 and R1]

(44)

Input with interrupts

• Phase 1: do nothing

• Phase 2: execute other code

• Phase 3: upon reception of the interrupt, read data from D0, clean CR0 and return to the interrupted code

[Diagram: when the device raises the interrupt, the CPU reads D0 into R0 over the bus and clears CR0]
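A sketch of the same input done with an interrupt handler; the register addresses are the assumed ones used above, and how the handler gets installed (interrupt vector table, OS call, ...) is platform specific and omitted:

#include <stdint.h>

#define D0  (*(volatile uint16_t *)0xFF00)   /* data register (assumed address)    */
#define CR0 (*(volatile uint16_t *)0xFF02)   /* control register (assumed address) */

volatile uint16_t last_data;                  /* shared with the normal code */

/* phase 3: executed only when the device raises the interrupt */
void input_handler(void)
{
    last_data = D0;            /* read the data          */
    CR0 &= (uint16_t)~0x1;     /* acknowledge the device */
}                              /* IRET returns to the interrupted code */

/* phases 1 and 2: the normal code keeps doing useful work and never polls the device */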

(45)

Interrupts

• Let’s compare polling and interrupt

[Diagram: with polling, the normal code spends phase 2 busy waiting in the polling code; with interrupts, the normal code keeps running during phase 2 and only phase 3 is executed in the interrupt handler]

(46)

The meaning of phase 3

• Phase 3 is used to signal the device that the interrupt has been served

– It is a handshake protocol

• The device signals the interrupt

• The processor serves the interrupt and exchanges the data

• The processor signals the device that it has finished serving the interrupt

• Now a new interrupt from the same device can be raised

(47)

Interrupt disabling

• Two special instructions

– STI: enables interrupts
– CLI: disables interrupts

– These instructions are privileged

• Can be done only in supervisor mode

– When an interrupt arrives, the processor automatically switches to supervisor mode

[Diagram: the normal code disables interrupts with CLI; an interrupt arriving in that window stays pending; after STI the pending interrupt is served by the interrupt handler]

(48)

Many sources of interrupts

• Usually, processors have one single IRQ pin

– However, there are several different I/O devices

– Intel processors use an external Interrupt Controller

• 8 IRQ input lines, one output line

[Diagram: several I/O devices drive the lines IRQ0, IRQ1, IRQ2, ... of the interrupt controller, whose single output line reaches the IRQ pin of the CPU; CPU and devices are connected through the bus]

(49)

Nesting interrupts

• Interrupt disabling

– With CLI, all interrupts are disabled

• When an interrupt is raised,

– before calling the interrupt handler, interrupts are automatically disabled

– However, it is possible to explicitly call STI to re-enable interrupts even inside an interrupt handler
– In this way, we can “nest interrupts”

• One interrupt handler can itself be interrupted by another

interrupt
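A sketch of a handler written this way (x86 sti instruction; the two device routines are hypothetical stubs used only for illustration):

/* hypothetical device-specific routines, stubbed out for the example */
static void acknowledge_device(void)  { }
static void do_long_processing(void)  { }

void slow_handler(void)
{
    /* interrupts were disabled automatically when this handler was invoked */
    acknowledge_device();        /* urgent part: done with interrupts disabled  */
    __asm__ volatile("sti");     /* explicitly re-enable interrupts             */
    do_long_processing();        /* this part can now be preempted by a         */
                                 /* higher-priority interrupt (nesting)         */
}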

(50)

Interrupt controller

• Interrupts have priority

– IRQ0 has the highest priority, IRQ7 the lowest

• When an interrupt from an I/O device is raised

– If there are other interrupts pending

• If it is the highest priority interrupt, it is forwarded to the processor (raising the IRQ line)

• Otherwise, it remains pending, and it will be served when

the processor finishes serving the current interrupt

(51)

Nesting interrupts

• Why nesting interrupts?

– If interrupts are not nested, important services may be delayed too much

• For example, IRQ0 is the timer interrupt

• The timer interrupt is used to set the time reference of the system

• If the timer interrupt is delayed too much, it can get lost (i.e.

another interrupt from the timer could arrive before the previous one is served)

• Losing a timer interrupt can cause losing the correct time reference in the OS

• Therefore, the timer interrupt has the highest priority and

can interrupt everything, even another “slower” interrupt

(52)

Nested interrupts

[Diagram: a high-priority interrupt handler preempts a slow, lower-priority interrupt handler and runs nested inside it]

(53)

Atomicity

• A hardware instruction is atomic if it cannot be

“interleaved” with other instructions

– Atomic operations are always sequentialized
– Atomic operations cannot be interrupted

• They are safe operations

• For example, transferring one word from memory to a register or vice versa

– Non atomic operations can be interrupted

• They are not “safe” operations

• Non elementary operations are not atomic

(54)

Non atomic operations

• Consider a “simple” operation like

x = x + 1;

• In assembler:

LD R0, x
INC R0
ST x, R0

• A simple operation, like incrementing a memory variable, may consist of three machine instructions

• If the same operation is also done inside an interrupt handler, the two sequences can interleave (see the next slide)

(55)

Interrupt on non-atomic operations

Normal code:

int x = 0;
...
x = x + 1;
...

Handler code:

void handler(void) {
    ...
    x = x + 1;
    ...
}

[Diagram: the two increments interleave. The normal code executes LD R0, x and reads 0 into R0; the interrupt arrives, the handler saves the registers, executes LD R0, x / INC R0 / ST x, R0 (writing x = 1) and restores the registers; the normal code then resumes with INC R0 / ST x, R0 on its stale copy of x and writes 1 again. The final value of x is 1 instead of 2: one increment is lost.]

(56)

Solving the problem on a single processor

• One possibility is to disable interrupts in “critical sections”

Normal code:

...
CLI
LD R0, x
INC R0
ST x, R0
STI
...

Handler code:

Save registers
...
LD R0, x
INC R0
ST x, R0
...
Restore registers
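The same idea at the C level, as a sketch: cli()/sti() here are thin wrappers around the two privileged x86 instructions (a real kernel would use its own primitives, e.g. local_irq_disable() in Linux), so this code only makes sense in supervisor mode:

/* wrappers around the privileged instructions (x86, GCC inline assembly) */
static inline void cli(void) { __asm__ volatile("cli"); }
static inline void sti(void) { __asm__ volatile("sti"); }

volatile int x = 0;

void increment_x(void)
{
    cli();        /* critical section begins: no interrupt can run here     */
    x = x + 1;    /* load / increment / store can no longer be interleaved  */
    sti();        /* critical section ends: interrupts are enabled again    */
}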

(57)

Multi-processor systems

• Symmetric multi-processors (SMP)

– Identical processors
– One shared memory

[Diagram: four identical CPUs (CPU 0 to CPU 3) connected to one shared memory]

(58)

Multi-processor systems

• Two typical organisations

– Master / Slave

• The OS runs on one processor only (master), CPU0

• When a process requires an OS service, it sends a message to CPU0

– Symmetric

• One copy of the OS runs independently on each processor

• They must synchronise on common data structures

• We will analyse this configuration later in the course

(59)

Low level synchronisation in SMP

• The atomicity problem cannot be solved by disabling the interrupts!

– If we disable the interrupts, we protect the code from interrupts.

– It is not easy to protect from other processors

Both CPU 0 and CPU 1 execute:

...
LD R0, x
INC R0
ST x, R0
...

A possible global interleaving:

LD R0, x   (CPU 0)
LD R0, x   (CPU 1)
INC R0     (CPU 0)
INC R0     (CPU 1)
ST x, R0   (CPU 0)
ST x, R0   (CPU 1)
...

Both CPUs read the same initial value of x and write back the same result: one increment is lost.

(60)

Low level synchronisation in SMP

• Most processors support some special instruction

– XCH Exchange register with memory location

– TST If memory location = 0, set location to 1 and return true (1), else return false (0)

/* pseudo-code: the hardware executes each of these as one atomic instruction */

void xch(register R, memory x) {
    int tmp;
    tmp = R; R = x; x = tmp;
}

int tst(memory x) {
    if (x == 1) return 0;
    else {
        x = 1;
        return 1;
    }
}

(61)

Locking in multi-processors

• We define one variable s

– If s == 0, then we can perform the critical operation

– If s == 1, we must wait before performing the critical operation

• Using XCH or TST we can implement two functions:

– lock() and unlock()

void lock(int s) {
    int a = 1;
    while (a == 1) XCH(s, a);
}

/* alternative implementation using TST */
void lock(int s) {
    while (TST(s) == 0);
}

void unlock(int s) {
    s = 0;
}
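For comparison, a sketch of the same spin lock written with the GCC/Clang __atomic builtins, which compile down to an atomic exchange instruction of the processor (this is only an illustration, not the code of the slides):

/* 0 = free, 1 = taken */
static int s = 0;

void lock(void)
{
    /* atomically write 1 and obtain the old value; spin while it was already 1 */
    while (__atomic_exchange_n(&s, 1, __ATOMIC_ACQUIRE) == 1)
        ;                      /* active wait */
}

void unlock(void)
{
    __atomic_store_n(&s, 0, __ATOMIC_RELEASE);
}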

(62)

Locking in multi-processors

Both CPU 0 and CPU 1 execute (Lock(s), x = x + 1, Unlock(s)):

L0: TST s
    JZ L0        ; Lock(s): spin while TST returns 0
    LD R0, x
    INC R0
    ST x, R0     ; x = x + 1 (critical section)
    LD R1, 0
    ST s, R1     ; Unlock(s)
    ...

A possible global interleaving:

TST s      (CPU 0)
TST s      (CPU 1)
JZ L0      (CPU 0)
JZ L0      (CPU 1)
LD R0, x   (CPU 0)
TST s      (CPU 1)
INC R0     (CPU 0)
JZ L0      (CPU 1)
ST x, R0   (CPU 0)
TST s      (CPU 1)
LD R1, 0   (CPU 0)
JZ L0      (CPU 1)
ST s, R1   (CPU 0)
TST s      (CPU 1)
...        (CPU 0)
JZ L0      (CPU 1)

CPU 0 acquires the lock first, so while it is inside the critical section every TST on CPU 1 returns 0 and CPU 1 keeps spinning on L0; only after CPU 0 executes Unlock(s) can CPU 1 enter its own critical section.

(63)

Locking

• The lock / unlock operations are “safe”

– No matter how you interleave the operations, there is no possibility that the “critical parts” interleave

– However, lock() is an active wait and a possible waste of time

• The problem of locking is very general

• Solutions will be presented and analysed in greater detail later in the course
