An address on a paging system is a logical page
number and an offset. The physical page is found by searching a table based on
the logical page number to produce a physical page number. Because the
operating system controls the contents of this table, it can limit a process to
accessing only those physical pages allocated to the process. There is no way
for a process to refer to a page it does not own because the page will not be
in the page table. To allow such access, an operating system simply needs to
allow entries for non-process memory to be added to the process’s page table.
This is useful when two or more processes need to exchange data—they just read
and write to the same physical addresses (which may be at varying logical
addresses). This makes for very efficient interprocess communication.
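A minimal sketch of this lookup in C (the page size, table contents, and frame numbers are invented for illustration; a real MMU does this in hardware):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12                        /* 4 KiB pages */
#define PAGE_SIZE (1u << PAGE_BITS)
#define NUM_PAGES 16                        /* tiny address space for the sketch */

/* Hypothetical page table: logical page number -> physical frame number. */
static uint32_t page_table[NUM_PAGES] = { 5, 9, 2, 7 /* rest zero */ };

/* Translate a logical address to a physical address. */
static uint32_t translate(uint32_t logical)
{
    uint32_t page   = logical >> PAGE_BITS;        /* upper bits: page number */
    uint32_t offset = logical & (PAGE_SIZE - 1);   /* lower bits: page offset */
    uint32_t frame  = page_table[page];            /* table controlled by the OS */
    return (frame << PAGE_BITS) | offset;          /* frame number, same offset */
}

int main(void)
{
    uint32_t la = (3u << PAGE_BITS) | 0x2A4;       /* page 3, offset 0x2A4 */
    printf("logical 0x%x -> physical 0x%x\n",
           (unsigned)la, (unsigned)translate(la));
    return 0;
}
```

Two processes share memory when the OS maps the same frame number into both of their page tables, possibly at different logical page numbers.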
Why are page sizes always powers of 2?
Recall that paging is implemented by breaking up
an address into a page and offset number. It is most efficient to break the
address into X page bits and Y offset bits, rather than perform arithmetic on
the address to calculate the page number and offset. Because each bit position
represents a power of 2, splitting an address between bits results in a page
size that is a power of 2.
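A quick check of why the power-of-2 size matters: the divide and modulo that would otherwise compute the page number and offset reduce to a shift and a mask. A small sketch, assuming 4 KiB pages:

```c
#include <assert.h>
#include <stdint.h>

/* When the page size is a power of 2, the divide/modulo needed to split an
 * address reduce to a shift and a mask -- no arithmetic required. */
#define PAGE_SIZE 4096u                    /* 2^12 */

int main(void)
{
    uint32_t addr = 0x12345;
    assert(addr / PAGE_SIZE == addr >> 12);               /* page number */
    assert(addr % PAGE_SIZE == (addr & (PAGE_SIZE - 1))); /* page offset */
    return 0;
}
```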
What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?
Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to
continuously page fault. The system can detect thrashing by evaluating the
level of CPU utilization as compared to the level of multiprogramming. It can
be eliminated by reducing the level of multiprogramming.
Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs.
A page fault occurs when an access to a
page that has not been brought into main memory takes place. The operating
system verifies the memory access, aborting the program if it is invalid. If it
is valid, a free frame is located and I/O is requested to read the needed page
into the free frame. Upon completion of I/O, the process table and page table
are updated and the instruction is restarted.
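A user-space simulation of these steps (illustrative only; the process names, frame counts, and I/O are invented, and page replacement is simplified away):

```c
#include <stdbool.h>
#include <stdio.h>

/* Pages start out not-present; the first touch "faults", a free frame is
 * claimed, the page is "read in", and the access is retried. */
#define NUM_PAGES 8

struct pte { bool present; int frame; };
static struct pte page_table[NUM_PAGES];
static int next_free_frame = 0;

static int access_page(int page)
{
    if (page < 0 || page >= NUM_PAGES) {          /* invalid reference: abort */
        fprintf(stderr, "segfault: page %d\n", page);
        return -1;
    }
    if (!page_table[page].present) {              /* page fault               */
        int frame = next_free_frame++;            /* locate a free frame      */
        printf("fault on page %d: reading into frame %d\n", page, frame);
        /* ...disk I/O would happen here; the process blocks meanwhile...     */
        page_table[page].frame = frame;           /* update the page table    */
        page_table[page].present = true;
    }
    return page_table[page].frame;                /* "restart" the access     */
}

int main(void)
{
    access_page(3);   /* faults */
    access_page(3);   /* hits   */
    access_page(1);   /* faults */
    return 0;
}
```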
Explain Segmentation with paging?
Segments can be of different
lengths, so it is harder to find a place for a segment in memory than a page.
With segmented virtual memory, we get the benefits of virtual memory but we
still have to do dynamic storage allocation of physical memory. In order to
avoid this, it is possible to combine segmentation and paging into a two-level
virtual memory system. Each segment descriptor points to a page table for that segment. This gives some of the advantages of paging (easy placement) with some of the advantages of segments (logical division of the program).
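A toy sketch of the two-level lookup, with invented segment and page tables:

```c
#include <stdint.h>
#include <stdio.h>

/* Segment number selects a per-segment page table; the page number indexes
 * that table; the offset passes through unchanged. */
#define PAGE_BITS 12

struct segment { uint32_t *page_table; uint32_t num_pages; };

static uint32_t translate(const struct segment *segs,
                          uint32_t seg, uint32_t page, uint32_t off)
{
    if (page >= segs[seg].num_pages) {           /* beyond the segment's length */
        fprintf(stderr, "segmentation fault\n");
        return 0;
    }
    uint32_t frame = segs[seg].page_table[page]; /* per-segment page table     */
    return (frame << PAGE_BITS) | off;
}

int main(void)
{
    uint32_t code_pt[2] = { 10, 11 };            /* frames for segment 0 */
    uint32_t data_pt[1] = { 42 };                /* frames for segment 1 */
    struct segment segs[2] = { { code_pt, 2 }, { data_pt, 1 } };
    printf("0x%x\n", (unsigned)translate(segs, 1, 0, 0x10)); /* frame 42 */
    return 0;
}
```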
Define Demand Paging, Page fault interrupt, and Thrashing?
Demand Paging: Demand paging is the paging policy under which a page is not read into memory until it is requested, that is, until there is a page fault on the page.
Page fault interrupt: A page fault
interrupt occurs when a memory reference is made to a page that is not in
memory.
The present bit in the page table entry will be found to be off by the virtual memory hardware and it will signal an interrupt.
Thrashing: Thrashing occurs when a process spends more time paging than executing because it has too few frames allocated, so it page faults continuously.
What is fragmentation? Different types of fragmentation?
Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request.
External Fragmentation: External fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced: total memory space exists to satisfy a request, but it is not contiguous.
Internal Fragmentation: Internal fragmentation is the space wasted inside of allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition, but not being used (see the sketch after this answer).
External fragmentation can be reduced by compaction:
· Shuffle memory contents to place all free memory together in one large block.
· Compaction is possible only if relocation is dynamic, and is done at execution time.
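A small example of internal fragmentation, assuming a hypothetical allocator that rounds every request up to 64-byte blocks:

```c
#include <stdio.h>

/* An allocator that rounds every request up to a multiple of BLOCK wastes
 * the difference inside the allocated block. */
#define BLOCK 64u

int main(void)
{
    unsigned request = 100;                                  /* bytes asked for */
    unsigned granted = (request + BLOCK - 1) / BLOCK * BLOCK; /* rounded up     */
    printf("requested %u, granted %u, internal fragmentation %u bytes\n",
           request, granted, granted - request);             /* 100 -> 128      */
    return 0;
}
```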
What are the different Dynamic Storage-Allocation methods?
How to satisfy a request of size n
from a list of free holes?
First-fit: Allocate the first hole that is big
enough.
Best-fit: Allocate the smallest hole that is big
enough; must search entire list, unless ordered by size. It produces the
smallest leftover hole.
Worst-fit: Allocate the largest hole; must also
search entire list. Produces the largest
leftover hole.
First-fit and best-fit are better than
worst-fit in terms of speed and storage utilization.
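A sketch of first-fit and best-fit over a list of free-hole sizes (the hole sizes and the 212-byte request are invented; real allocators keep holes in linked lists, but an array keeps this short):

```c
#include <stddef.h>
#include <stdio.h>

static int first_fit(const size_t *holes, size_t count, size_t n)
{
    for (size_t i = 0; i < count; i++)
        if (holes[i] >= n)                 /* first hole big enough wins */
            return (int)i;
    return -1;
}

static int best_fit(const size_t *holes, size_t count, size_t n)
{
    int best = -1;
    for (size_t i = 0; i < count; i++)     /* must scan the entire list  */
        if (holes[i] >= n && (best < 0 || holes[i] < holes[(size_t)best]))
            best = (int)i;                 /* smallest hole that fits    */
    return best;
}

int main(void)
{
    size_t holes[] = { 100, 500, 200, 300, 600 };
    printf("first-fit for 212: hole %d\n", first_fit(holes, 5, 212)); /* 500 */
    printf("best-fit  for 212: hole %d\n", best_fit(holes, 5, 212));  /* 300 */
    return 0;
}
```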
What are Dynamic Loading, Dynamic Linking and Overlays?
Dynamic Loading:
· Routine is not loaded until it is called.
· Better memory-space utilization; an unused routine is never loaded.
· Useful when large amounts of code are needed to handle infrequently occurring cases.
· No special support from the operating system is required; implemented through program design.
Dynamic Linking: Linking postponed until execution time.
· A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
· The stub replaces itself with the address of the routine, and executes the routine.
· The operating system is needed to check whether the routine is in the process's memory address space.
· Dynamic linking is particularly useful for libraries (see the dlopen sketch after this answer).
Overlays:
· Keep in memory only those instructions and data that are needed at any given time.
· Needed when a process is larger than the amount of memory allocated to it.
· Implemented by the user; no special support needed from the operating system; programming design of the overlay structure is complex.
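On POSIX systems, run-time loading and linking of the kind described above can be driven explicitly with the dlopen API. A minimal sketch (the library and symbol names are just an example):

```c
#include <dlfcn.h>
#include <stdio.h>

/* Compile with: cc demo.c -ldl
 * libm is opened only when needed and cos() is located at run time. */
int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* load on first use */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}
```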
Binding of Instructions and Data to Memory?
Address binding of instructions and data to memory addresses can happen at three different stages:
Compile time: If the memory location is known a priori, absolute code can be generated; must recompile the code if the starting location changes.
Load time: Must generate relocatable code if the memory location is not known at compile time.
Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Needs hardware support for address maps (e.g., base and limit registers).
Difference between Logical and Physical Address Space?
· The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
· Logical address – generated by the CPU; also referred to as a virtual address.
· Physical address – address seen by the memory unit.
· Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
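A toy illustration of execution-time binding with base and limit registers (the register values are invented):

```c
#include <stdint.h>
#include <stdio.h>

/* The CPU emits logical addresses; the memory unit sees base + logical,
 * after a limit check. */
static const uint32_t BASE  = 0x4000;   /* where the process was placed */
static const uint32_t LIMIT = 0x1000;   /* size of its logical space    */

int main(void)
{
    uint32_t logical = 0x0346;          /* address generated by the CPU */
    if (logical >= LIMIT) {             /* protection: trap on overflow */
        fprintf(stderr, "trap: address out of range\n");
        return 1;
    }
    printf("logical 0x%x -> physical 0x%x\n",
           (unsigned)logical, (unsigned)(BASE + logical));
    return 0;
}
```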
Recovery from Deadlock?
Process Termination:
· Abort all deadlocked processes.
· Abort one process at a time until the deadlock cycle is eliminated.
· In which order should we choose to abort?
1) Priority of the process.
2) How long the process has computed, and how much longer until completion.
3) Resources the process has used.
4) Resources the process needs to complete.
5) How many processes will need to be terminated?
6) Is the process interactive or batch?
Resource Preemption:
· Selecting a victim – minimize cost.
· Rollback – return to some safe state, restart the process from that state.
· Starvation – the same process may always be picked as the victim; include the number of rollbacks in the cost factor.
Deadlock Detection-Algorithm Usage?
· When, and how often, to invoke the algorithm depends on:
How often is a deadlock likely to occur?
How many processes will need to be rolled back?
· If the detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and so we would not be able to tell which of the many deadlocked processes "caused" the deadlock.
What is a Safe State and its use in deadlock avoidance?
When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state.
· The system is in a safe state if there exists a safe sequence of all processes.
· A sequence <P1, P2, ..., Pn> is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
When Pj is finished, Pi can obtain the needed resources, execute, return the allocated resources, and terminate.
When Pi terminates, Pi+1 can obtain its needed resources, and so on.
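A sketch of the safety check this describes, for a single resource type with invented allocations and needs: repeatedly find a process whose remaining need fits in what is available, let it finish, and reclaim its allocation.

```c
#include <stdbool.h>
#include <stdio.h>

#define N 4

int main(void)
{
    int alloc[N] = { 1, 4, 2, 3 };   /* resources each Pi currently holds */
    int need[N]  = { 3, 1, 4, 1 };   /* what each Pi may still request    */
    bool done[N] = { false };
    int available = 2;

    for (int finished = 0; finished < N; ) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            if (!done[i] && need[i] <= available) {
                available += alloc[i];       /* Pi runs, then releases all */
                done[i] = true;
                finished++;
                progress = true;
                printf("P%d can finish; available = %d\n", i, available);
            }
        }
        if (!progress) {                     /* no Pi can proceed: unsafe  */
            printf("state is unsafe\n");
            return 1;
        }
    }
    printf("state is safe\n");
    return 0;
}
```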
Deadlock Avoidance ⇒ ensure that a system will never enter an unsafe state.
What are the Methods for Handling Deadlocks?
· Ensure that the system will never enter a deadlock state.
· Allow the system to enter a deadlock state and then recover.
· Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems, including UNIX.
Conditions for deadlock occurrence?
Deadlock can arise if four conditions hold simultaneously.
Mutual exclusion: only one process at a time can use a resource.
Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
Circular wait: there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., and Pn is waiting for a resource held by P0.
Different types of Real-Time Scheduling?
Hard real-time systems – required to complete a critical task within a guaranteed amount
of time.
Soft
real-time computing – requires
that critical processes receive priority over less
fortunate ones.
What is starvation and aging?
Starvation: Starvation is a resource management problem where a process does
not get the resources it needs for a long time because the resources are being
allocated to other processes.
Aging: Aging is a technique to avoid starvation in a scheduling system. It works by adding an aging factor to the priority of each request. The aging factor must increase the request's priority as time passes and must ensure that a request will eventually become the highest-priority request (after it has waited long enough).
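A small simulation of aging (all numbers invented): each tick, every waiting request's effective priority grows, and the request that is served stops aging, so even the lowest-priority request is eventually served.

```c
#include <stdio.h>

#define NREQ 3

int main(void)
{
    int priority[NREQ] = { 10, 3, 1 };     /* base priorities             */
    int waiting[NREQ]  = { 0, 0, 0 };      /* ticks spent waiting (aging) */

    for (int tick = 1; tick <= 12; tick++) {
        int best = 0;
        for (int i = 0; i < NREQ; i++) {
            waiting[i]++;                  /* everyone ages while waiting */
            if (priority[i] + waiting[i] > priority[best] + waiting[best])
                best = i;
        }
        printf("tick %2d: serve request %d\n", tick, best);
        waiting[best] = 0;                 /* served request stops aging  */
    }
    return 0;                              /* requests 1 and 2 get served
                                              around ticks 9 and 11       */
}
```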
Compare Linux credit based algorithm with other scheduling algorithms?
For conventional time-shared processes, Linux uses a prioritized, credit-based algorithm. Each process possesses a certain number of scheduling credits; when a new task must be chosen to run, the process with the most credits is selected. Every time a timer interrupt occurs, the currently running process loses one credit; when its credits reach zero, it is suspended and another process is chosen.
If no runnable processes have any credits,
then Linux performs a recrediting operation, adding credits to every process in
the system (rather than just to the runnable ones), according to the following
rule:
credits = credits / 2 + priority
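A sketch of this recrediting rule with an invented process set: a long-blocked process keeps its old credits, so the rule gives it a boost over processes that recently exhausted theirs.

```c
#include <stdio.h>

#define NPROC 3

int main(void)
{
    int credits[NPROC]  = { 0, 0, 7 };   /* P2 was blocked, kept its credits */
    int priority[NPROC] = { 5, 3, 3 };

    for (int i = 0; i < NPROC; i++) {
        credits[i] = credits[i] / 2 + priority[i];   /* the rule above */
        printf("P%d: credits = %d\n", i, credits[i]);
    }
    /* P0 -> 5, P1 -> 3, P2 -> 7/2 + 3 = 6: the blocked P2 now outranks P1. */
    return 0;
}
```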
The above scheduling class is used for time-shared processes. Real-time scheduling in Linux is simpler: it uses two scheduling classes, first come, first served (FCFS) and round-robin (RR). In both cases, each process has a priority in addition to its scheduling class. In time-sharing scheduling, however,
processes of different priorities can still compete with one another to some
extent; in real-time scheduling, the scheduler always runs the process with the
highest priority. Among processes of equal priority, it runs the process that
has been waiting longest. The only difference between FCFS and RR scheduling is
that FCFS processes continue to run until they either exit or block, whereas a
round-robin process will be preempted after a while and will be moved to the
end of the scheduling queue, so round-robin processes of equal priority will
automatically time share among themselves.
Linux's real-time scheduling is soft real-time rather than hard real-time. The scheduler offers
strict guarantees about the relative priorities of real-time processes, but the
kernel does not offer any guarantees about how quickly a real-time process will
be scheduled once that process becomes runnable.
Thus Linux uses different scheduling classes for time-shared and real-time processes.
Give a non-computer example of pre-emptive and non-pre-emptive scheduling.
Consider any system where people use some kind of resource and compete for it.
A non-computer example of preemptive scheduling is traffic on a single-lane road: if there is an emergency, such as an ambulance on the road, the other vehicles give way to the vehicle in need. An example of non-preemptive scheduling is people standing in a queue for tickets: each person is served completely before the next one starts.
Describe the actions taken by thread library to context switch between user level threads?
The thread library function performs the following actions to context switch between user-level threads:
a) Copy all live registers to the thread control block (TCB).
b) Restore the state of the thread to run next, i.e., copy the values of the live registers from its TCB back into the registers.
c) Move to the next thread to execute.
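A minimal user-level switch can be sketched with the POSIX ucontext API (getcontext/makecontext/swapcontext; deprecated on some systems, but it makes the register save and restore of steps a–c explicit; the thread body is invented):

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;
static char stack[64 * 1024];

static void thread_func(void)
{
    puts("in thread");
    swapcontext(&thread_ctx, &main_ctx);   /* save thread, restore main */
}

int main(void)
{
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp   = stack;
    thread_ctx.uc_stack.ss_size = sizeof stack;
    thread_ctx.uc_link          = &main_ctx;
    makecontext(&thread_ctx, thread_func, 0);

    puts("switching to thread");
    swapcontext(&main_ctx, &thread_ctx);   /* save main, restore thread */
    puts("back in main");
    return 0;
}
```

What is Multiprogramming, Multitasking, Multithreading?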
Multiprogramming: The concept of multiprogramming is that the operating system keeps several jobs in memory simultaneously.
Multitasking: Multitasking is the logical extension of multiprogramming. The concept of multitasking is quite similar to multiprogramming, but the difference is that the switching between jobs occurs so frequently that the users can interact with each program while it is running. This concept is also known as time-sharing.
Multithreading: An application typically is implemented as a separate process with several threads of control. In some situations a single application may be required to perform several similar tasks; for example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have many clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time.
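A minimal sketch of the one-thread-per-request idea with POSIX threads (the request data is invented; a real server would accept sockets instead):

```c
#include <pthread.h>
#include <stdio.h>

/* Compile with: cc demo.c -lpthread
 * Each "request" is handled by its own thread, so a slow request does not
 * block the others. */
static void *handle_request(void *arg)
{
    int id = *(int *)arg;
    printf("serving request %d\n", id);    /* pretend work happens here */
    return NULL;
}

int main(void)
{
    pthread_t workers[3];
    int ids[3] = { 1, 2, 3 };

    for (int i = 0; i < 3; i++)            /* one thread per request    */
        pthread_create(&workers[i], NULL, handle_request, &ids[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(workers[i], NULL);    /* wait for all to finish    */
    return 0;
}
```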
Difference between Primary storage and secondary storage?
Main memory – the only large storage medium that the CPU can access directly.
Secondary storage – extension of main memory that provides large non-volatile storage capacity.
Common Functions of Interrupts?
· Interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines.
· Interrupt architecture must save the address of the interrupted instruction.
· Incoming interrupts are disabled while another interrupt is being processed, to prevent a lost interrupt.
· A trap is a software-generated interrupt caused either by an error or a user request.
· An operating system is interrupt driven.
Advantages of distributed systems
· Resource sharing
· Computation speed-up – load sharing
· Reliability
· Communication
Distributed Systems
Distribute the computation among several physical processors.
Loosely coupled system – each processor has its own local memory; processors communicate with one another through various communication lines, such as high-speed buses or telephone lines.
Advantages of parallel system
· Increased throughput
· Economical
· Increased reliability
· Graceful degradation
· Fail-soft systems
If the processors share the same system memory and system clock (in contrast to a distributed system), it is called a parallel system.
Parallel Systems
Multiprocessor systems with more than one CPU in close communication.
Tightly coupled system – processors share memory and a clock; communication usually takes place through the shared memory.
What is DRAM? In which form does it store data?
DRAM is the Hershey's chocolate of readable/writable memory: it's not the best, but it's cheap, does the job, and is available almost everywhere you look. DRAM data resides in a cell made of a capacitor and a transistor. The capacitor tends to lose its charge unless it's recharged every couple of milliseconds, and this recharging tends to slow down the performance of DRAM compared to speedier RAM types.
What is an Operating System?
A program that acts as an intermediary between a user of a
computer and the computer hardware.
Operating system goals:
· Execute user programs and make solving user problems easier.
· Make the computer system convenient to use.
· Use the computer hardware in an efficient manner.
What is Context Switch?
Switching the CPU to another process requires saving the state of
the old process and loading the saved state for the new process. This task is
known as a context switch.
Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers).
What are the main differences between Micro-Controller and Micro-Processor?
A microcontroller is by definition a computer on a chip. It includes all the necessary parts (including the memory) in one IC. You just need to apply power (and possibly a clock signal) to the device and it starts executing the program programmed into it. A microcontroller generally has the main CPU core, ROM/EPROM/EEPROM/FLASH, RAM and some necessary functions (like timers and I/O controllers) all integrated into one chip. The original idea behind the microcontroller was to limit the capabilities of the CPU itself, allowing a complete computer (memory, I/O, interrupts, etc.) to fit on the available silicon real estate.
Microcontrollers are typically used where processing power isn't so important. More important are generally compact construction, small size, low power consumption, and that the chips are cheap. For example, controlling a microwave oven is easily accomplished with the smallest of microcontrollers. There are countless small electronic devices that are nowadays based on microcontrollers. A modern home can easily include tens or hundreds of microcontrollers, as almost every modern device that has electronics has a microcontroller (or more than one) inside.
A microprocessor is generally just the CPU core itself, although nowadays it might have some accessory parts also integrated into the same chip.
Why is paging used?
Paging is a solution to the external fragmentation problem: it permits the logical address space of a process to be non-contiguous, thus allowing a process to be allocated physical memory wherever the latter is available.
What are different tasks of Lexical Analysis?
The purpose of the lexical analyzer is to partition the input text, delivering a sequence of comments and basic symbols. Comments are character sequences to be ignored, while basic symbols are character sequences that correspond to terminal symbols of the grammar defining the phrase structure of the input.
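A toy lexical analyzer illustrating the partitioning (the token classes and '#'-comment syntax are invented for the sketch):

```c
#include <ctype.h>
#include <stdio.h>

int main(void)
{
    const char *p = "x1 = 42 + y # trailing comment";

    while (*p) {
        if (isspace((unsigned char)*p)) { p++; continue; }
        if (*p == '#') break;                      /* comment: ignore rest */
        if (isdigit((unsigned char)*p)) {          /* NUMBER               */
            printf("NUMBER: ");
            while (isdigit((unsigned char)*p)) putchar(*p++);
        } else if (isalpha((unsigned char)*p)) {   /* IDENTIFIER           */
            printf("IDENT : ");
            while (isalnum((unsigned char)*p)) putchar(*p++);
        } else {                                   /* single-char operator */
            printf("OP    : %c", *p++);
        }
        putchar('\n');
    }
    return 0;
}
```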
Describe different job scheduling algorithms in operating systems
Scheduling is the activity of deciding when processes will receive the resources they request.
FCFS: FCFS stands for First Come, First Served. In FCFS, the job that has been waiting the longest is served next.
Round Robin Scheduling: Round Robin scheduling is a scheduling method where each process gets a small quantum of time to run; it is then preempted and the next process gets to run. This is called time-sharing and gives the effect of all the processes running at the same time.
Shortest Job First: The Shortest Job First scheduling algorithm is a non-preemptive scheduling algorithm that chooses the job that will execute for the shortest amount of time.
Priority Scheduling: Priority scheduling is a scheduling method where, at all times, the highest-priority process is assigned the resource.
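A sketch of Round Robin with a quantum of 2 time units and invented burst times: each process runs for at most one quantum, then moves to the back of the queue.

```c
#include <stdio.h>

#define N 3
#define QUANTUM 2

int main(void)
{
    int remaining[N] = { 5, 3, 1 };        /* CPU time each process needs */
    int left = N;

    while (left > 0) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] <= 0) continue;
            int run = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            remaining[i] -= run;           /* run one quantum at most     */
            printf("P%d runs %d unit(s)%s\n", i, run,
                   remaining[i] == 0 ? " and finishes" : "");
            if (remaining[i] == 0) left--;
        }
    }
    return 0;
}
```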
Differentiate between Complier and Interpreter?
An interpreter reads one instruction at a time and carries out the actions implied by that instruction, without translating the program into machine code beforehand. A compiler, by contrast, translates the entire program before it is executed.
What is cache memory?
Cache
memory is random access memory (RAM) that a computer microprocessor can
access more quickly than it can access regular RAM. As the microprocessor
processes data, it looks first in the cache memory and if it finds the data
there (from a previous reading of data), it does not have to do the more
time-consuming reading of data from larger memory.
List out the software life cycle
· Specification of the task
· Design of algorithms
· Implementation (coding)
· Testing and debugging
· Maintenance and evolution of the system
· Obsolescence