IB Computer Science Topic 6 (Topic 6.pptx)

Topic 6: Resource Management HL Extension

6.1.1 Identify resources that need to be managed within a computer system

These are the resources mentioned in the guide: cache, primary memory, secondary storage, disk storage, processor speed, sound processor, graphics processor, bandwidth, screen resolution, network connectivity.

These can be grouped as: Storage, Processing, and I/O (Input/Output).

Why do resources need to be managed at all?

Let's say you have two jobs to do: cooking a meal and washing some clothes. You could do them like this:
- Put the clothes into the washing machine
- Wait while they get washed
- Hang the clothes up
- Start the food cooking
- Wait till it cooks
- Serve the food

But it would save time to do this:
- Put the clothes into the washing machine
- Start the food cooking
- Hang the clothes up
- Serve the food

Computer scientists worked out very early on (i.e. in the 1960s) that the most efficient way to use a computer is to get it to do several things at the same time. This concept was originally called multi-programming. This idea, together with a set of other strategies for sharing the CPU, is now called multi-tasking.

Multi-tasking

As soon as we have a multi-tasking system, we have the problem of how to share one set of resources among a group of running programs. How best to do this is the subject of a lot of research. There is no single best answer, but there are a lot of different strategies. Here are the main ones:
- Multiple CPUs ("cores"), e.g. dual core, quad core, graphics processor, etc.
- Time-slicing
- Prioritisation
- Polling
- Interrupts
- Blocking
- Swapping

A CPU process called the scheduler is responsible for deciding which programs get CPU time and when. There are all sorts of ways of doing it, some of which you will learn more about in your Computer System Architecture and Operating Systems courses at university. Check here for more details: http://en.wikipedia.org/wiki/Scheduling_(computing)
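To make time-slicing concrete, here is a minimal sketch (my own illustration, not from the slides) of a round-robin scheduler simulation in Java. The process names, burst times and the quantum of 3 are all invented.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // A minimal round-robin scheduling simulation (illustrative values only).
    public class RoundRobinDemo {

        static class Job {
            String name;
            int remaining; // units of CPU time still needed

            Job(String name, int remaining) {
                this.name = name;
                this.remaining = remaining;
            }
        }

        public static void main(String[] args) {
            final int QUANTUM = 3; // the fixed time slice each process gets per turn
            Queue<Job> ready = new ArrayDeque<>();
            ready.add(new Job("editor", 5));
            ready.add(new Job("compiler", 8));
            ready.add(new Job("browser", 2));

            int clock = 0;
            while (!ready.isEmpty()) {
                Job job = ready.poll();                        // take the next process in the queue
                int slice = Math.min(QUANTUM, job.remaining);  // run it for at most one quantum
                clock += slice;
                job.remaining -= slice;
                System.out.println("t=" + clock + ": ran " + job.name
                        + " for " + slice + " units, " + job.remaining + " left");
                if (job.remaining > 0) {
                    ready.add(job);                            // not finished: back of the queue
                }
            }
        }
    }

Each process gets an equal turn, which is exactly the weakness noted above: a process that needs very little CPU time waits just as long between turns as a heavy one.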

This is a modern IBM mainframe

6.1.2 Evaluate the resources available in a variety of computer systems

Supercomputers: Not mentioned in the guide, supercomputers are the most powerful computers. Their focus is computation rather than data or communication. They are used to run large-scale simulations, often in the military, scientific or economic fields. They don't have many users; they just produce numerical results.

Mainframes: Mainframes are the most powerful type of commercial computer. Their emphasis is on processing a large amount of data and being online at all times. Mainframes are used by large corporations, especially those that specialize in data management or financial tracking. Every time you use an ATM or pay for something by credit card, it will be a mainframe that processes the transaction. They have a lot of everything: (parallel) processing power (CPU cores), very high RAM and bandwidth. They will also have highly resilient failover, redundancy and backup systems.

Servers: A server is the general name for a computer on a network that listens for requests from "clients" and supplies a response to each request. Examples are web servers (serving HTML pages or other web content), database servers (serving query results) and file servers (serving application files, probably stored on a large disk on a LAN). In general servers need to deal with a large number of connections, so they need high-bandwidth connections to the network (which may be the internet), increased storage space to store users' files, and in some cases (e.g. SaaS) increased RAM in which to run users' processes.

PCs: PCs (which include Macs!) are general-purpose machines for the individual home or business user. Extra processing power and memory (RAM) are required by specialist users, e.g. gamers or those working with graphics or animation.

Sub-laptops: This is an out-of-date term and reflects the fact that the syllabus you are working to was written ten years ago. It refers to small, cheap laptops that are highly portable but contain nothing more than the basic resources. A modern-day equivalent would be a Chromebook.

Cell phones: Cell phones are no longer regarded as phones with some computing capability; they are powerful computers in themselves. Only space limits their features, with users wanting fast internet and HD gaming and video. The process of computers and phones becoming one and the same thing is called "convergent technology".

PDAs and digital cameras: PDAs are dead. Digital cameras are now only specialist devices, such as high-end DSLRs or highly portable or underwater cameras like the GoPro. They still obviously require disk space to store photos, analogue-to-digital conversion, and some processing power, but they are very specialized.

6.1.3 Identify the limitations of a range of resources in a specified computer system

I guess this means what happens if you don't have enough…

Processing power (a slow CPU): Remember the fetch-execute cycle? The clock speed of the processor tells you how many cycles it can do in a second. A faster processor speeds up everything your computer can do, as long as it is not slowed down or kept busy by some other bottleneck.

One, two or four CPU cores: If you are running lots of different processes, then having four CPU cores is like having four computers, but if you are just doing one job then having four CPU cores will generally not improve the performance by four times. Why? Because most jobs are intrinsically difficult to parallelize (break into pieces that can be done concurrently).

Memory (RAM): Not having enough RAM is one of the principal reasons for a slow system. Imagine having a desk that's only the size of a page. You wouldn't be able to keep your notes, laptop, pens, calculator, etc. on the table at the same time. You would need to keep wasting time moving things off your desk (perhaps onto the floor) and picking other things up off the floor. This is exactly what happens when a computer doesn't have enough RAM. It has to keep swapping pages in and out of memory, on and off the disk. This is known as "thrashing".

Storage (disk space): Having too little disk space won't generally slow your work down, but it will limit the number of applications that you can store and therefore run, and it will reduce the amount of data you can store.

Paging file / swap space: This is a subtle point. You know that paging involves dividing memory up into equal-sized chunks and swapping them between RAM and disk rapidly, to allow more programs to run concurrently than would normally be possible using RAM alone. Well, the amount of disk space that you allow the operating system to use for this purpose makes a difference. If you limit it, then the OS will not be able to take full advantage of virtual memory. Most modern operating systems will allow you to limit swap file space, but by default they will manage it themselves.

Bandwidth (network communication speed, bits per second): Obviously, if a computer cannot communicate quickly over the network then it will spend most of its time waiting for something to do. Computers dedicated to serving clients on a network require very fast internet connections.

Overview: The important thing to remember is that a computer is only as fast as its slowest component. Having a very fast processor will not speed up your download speed, and it won't allow you to run lots of applications if it has to spend all of its time swapping pages between RAM and the hard disk because of a lack of memory.

The main resources to talk about are memory, CPU speed, storage space and bandwidth. The others are in there to augment your understanding. Still worth reading!!

GPUs

A Graphics Processing Unit is a specialized processor that is dedicated to processing graphics:
- Optimized to do the vector mathematics required for graphics manipulations.
- Particularly good at parallel processing, because graphics operations can generally be done concurrently.
- Frees up the CPU to do more general processing.

https://en.wikipedia.org/wiki/Graphics_processing_unit

Multiple CPUs ("cores") e.g. dual core, quad core, graphics processor, etc: It is obvious that more CPUs will give greater processing power but an extra layer of complexity is introduced in deciding which core should be used when. Another idea is to dedicated resources to a particular function. Graphics is a common one with modern high-performance gaming computers dedicating extra CPUs and RAM for use by graphics cards alone. Memory Management Policies: Multi-tasking OSs have to make decisions about how to use resources. The basis on which they make these decisions are called policies . Here's an example: The OS needs to swap a page back from the disk into memory. Which page does it swap out of memory to make room? Policies might be: Replace the page that has been in the memory for the longest time . Replace the least recently used page. Replace the least frequently used page. Scheduling policies: This is the term given to the method by which the OS decides which processes should be able to use which resources, when and for how long. A good example is which running process in a multi-tasking OS should get to use the CPU. The OS will generally try to keep all resources maximally utilized to avoid idle time. It may also try to minimize delays or maximise fairness in the use of resources. Round-robin : This is the idea that n running programs get one nth of the time available by the processor. This works if all running programs are as demanding of CPU time as each other, but this is seldom the case. Prioritisation: This is the concept that some running processes can be treated as more important than others and so they get more CPU time. First-in, first-out: As soon as a process is ready, it gets added to a queue. This can mean that one or two processes dominate the CPU. Polling: This is used by the CPU to find out if a program needs CPU time. Essentially the CPU keeps asking the program (or hardware device) over and over again. Polling and interrupting are alternative methods of achieving the same end and are dealt with separately on another slide. Interrupts: Instead of the CPU continually polling a process to see if it needs CPU time, it is left up to the process to "interrupt" the CPU and tell it that it needs CPU time. Blocking: This is a method by which a program can declare itself unable to proceed until some condition is met, ie a resource has become available, eg the hard disk, or some input has been provided by the user. Swapping: A blocked process can be "swapped out" of memory by the OS and its state saved to disk. When it is ready to be resumed, the OS can swap it back in and start running it again. This ensures that memory is not wasted. Swapping is an integral part of the virtual memory management technique of "paging". Important Concepts

The Problem of I/O

Consider the following program:

    import java.util.Scanner;

    public class NameLengths {
        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);
            System.out.println("Enter your name:");
            String userName = in.nextLine();
            while (!userName.equalsIgnoreCase("x")) {
                System.out.println("Your name has " + userName.length() + " letters.");
                System.out.println("Enter your name:");
                userName = in.nextLine();
            }
        }
    }

What does this program spend most of its time doing? The fact is, this program spends the VAST majority of its time waiting for input at the calls to in.nextLine(). In all computer systems, waiting for I/O (e.g. reading from or writing to disk, sending or receiving data on a network, etc.) takes orders of magnitude longer than the execution of other program instructions.

Data transfer between the CPU and hardware devices

CPUs can process data much faster than hardware devices (or people) can. Imagine having a conversation with someone who only says one word per minute, and who can only listen to what you're saying if you say it at the same slow speed. Very quickly you will find that you are spending the vast majority of your time sitting waiting. You might decide it's easier if she writes down her message on a piece of paper, very slowly, while you go off and do something else. You can then come back later, quickly read the message, quickly write a reply and then go off and do something else again, while she takes ages reading it and writing her reply. This is precisely what happens when the CPU talks to a hardware device.

The piece of paper on which you write and receive your notes to and from the hardware device is called a buffer. A buffer allows the CPU to queue up a meaningful amount of work each time it communicates with a hardware device.
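As a rough illustration of buffering (my own sketch, not from the slides), the code below writes the same data twice: once a byte at a time straight to the operating system, and once through a buffer that collects the bytes and hands them over in large chunks. The file names are invented.

    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    // Buffered vs unbuffered output: the buffered stream queues bytes in memory
    // and flushes them in batches instead of making one request per byte.
    public class BufferDemo {
        public static void main(String[] args) throws IOException {
            byte[] data = new byte[64 * 1024];

            // Unbuffered: every write() is a separate request to the OS / device.
            try (OutputStream raw = new FileOutputStream("unbuffered.bin")) {
                for (byte b : data) raw.write(b);
            }

            // Buffered: bytes accumulate in an in-memory buffer and are written in large chunks.
            try (OutputStream buffered = new BufferedOutputStream(new FileOutputStream("buffered.bin"))) {
                for (byte b : data) buffered.write(b);
            }
        }
    }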

I/O and Hardware

One of the main reasons I/O is so slow is that it involves hardware, i.e. actually moving stuff in the physical world. That could be the read head on a disk drive, or the moving parts in a printer. Moving these things is much, much slower than the speed at which electrical impulses travel around a silicon chip. You can think of the time wasted by I/O as being something like having a Skype chat by snail mail. It takes you seconds to write your message, but ages to get a reply.

Solving the I/O problem: Blocking

It clearly makes sense to do something else while you're waiting for your snail mail reply, if possible. (The alternative is known as "busy waiting", i.e. not doing anything, but not able to yield to another process either, and is clearly undesirable.) A program that is waiting for I/O and can't do anything until it arrives is said to be blocked on I/O. The OS detects this, swaps it out of memory and gets on with other tasks.

But how does the OS know when your snail mail reply has arrived? Two options:
- It can keep checking for it (polling)
- It can have some sort of alert system that tells it (interrupt)

It is possible for process A to be blocked on a resource held by process B, and simultaneously for process B to be blocked on a resource held by process A. This situation is known as deadlock, and operating systems employ a variety of algorithms to detect and/or prevent it.
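Here is a small Java sketch (my own illustration, not from the slides) of what "blocked" looks like from a program's point of view: the consumer thread calls take() on a BlockingQueue and is descheduled until data arrives, rather than busy-waiting. The 2-second delay stands in for a slow device.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // The consumer thread blocks on take() until the "device" (the main thread)
    // produces some data. Delay and message are invented for illustration.
    public class BlockingDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> incoming = new ArrayBlockingQueue<>(10);

            Thread consumer = new Thread(() -> {
                try {
                    String message = incoming.take(); // blocks here until something is available
                    System.out.println("Received: " + message);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            consumer.start();

            Thread.sleep(2000);            // the "device" takes its time
            incoming.put("data ready");    // unblocks the waiting consumer
            consumer.join();
        }
    }

While the consumer is blocked it uses no CPU time at all; the scheduler is free to run other work.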

Hardware Interrupts

"An interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and, after the interrupt handler finishes, the processor resumes normal activities." (Wikipedia)

An interrupt is a signal that stops the CPU and forces it to do something else immediately. The interrupt does this without waiting for the current program to finish. It is unconditional and immediate, which is why it is called an interrupt. The whole point of an interrupt is that the main program can perform a task without worrying about an external event. Programs cause these interrupts constantly; these are called software interrupts. Some hardware can interrupt the CPU in this way; this is called a hardware interrupt.

Polling

If the piece of hardware cannot interrupt the CPU, then the CPU has to keep checking with the hardware to see if it has finished. This used to be the norm on non-multitasking systems; after all, the CPU has nothing else to do except wait for the I/O to complete. Interrupts are used in almost all cases now, but polling can have better performance if:
- The CPU hasn't got anything else to do
- The result of the poll is very likely to be positive
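For contrast with the blocking sketch above, here is a hypothetical polling loop in Java (not from the slides): the main thread keeps asking whether the "device" has finished, doing a little work on every check. The flag, delays and poll interval are all invented.

    import java.util.concurrent.atomic.AtomicBoolean;

    // Polling: the main thread repeatedly asks "is the device finished yet?"
    // instead of being notified. Timings are invented for illustration.
    public class PollingDemo {
        public static void main(String[] args) throws InterruptedException {
            AtomicBoolean deviceFinished = new AtomicBoolean(false);

            Thread device = new Thread(() -> {
                try {
                    Thread.sleep(2000);          // the "device" works slowly
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                deviceFinished.set(true);
            });
            device.start();

            int polls = 0;
            while (!deviceFinished.get()) {      // keep checking: this is the poll
                polls++;
                Thread.sleep(10);                // without this sleep it would be pure busy-waiting
            }
            System.out.println("Device finished after " + polls + " polls");
        }
    }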

Interrupts vs Polling

- Interrupts save CPU time because the CPU doesn't have to keep checking. ✓
- But too many interrupts can slow the CPU down. ✗
- Polling is easy to implement because the hardware doesn't need to be able to do anything special. ✓
- Polling gives the CPU more control over what it does. ✓
- Polling wastes CPU time. ✗

Verdict: Almost all hardware devices use interrupts where possible.

Task

Think of a normal household telephone. Think of what happens when someone calls. How do you find out that someone wants to talk? Is this analogous to an interrupt or polling? Once you have decided which strategy this is analogous to, interrupt or polling, describe what a telephone would be like if it used the other strategy.

A third way: direct memory access (DMA)

Because sending and receiving data to and from hardware peripherals is slow, the CPU often has to waste its time either:
- polling the device to see if it wants to read from or write to RAM, or
- being interrupted by the device whenever it wants to read from or write to RAM.

DMA allows the hardware device to bypass the CPU and access RAM directly, to save or retrieve the data it needs. Instead of the CPU having to be involved in the exchange of data, a DMA controller (a bit like a mini-CPU dedicated to the task) coordinates the exchange. DMA is suitable when the I/O has a high transmission rate, e.g. Ethernet, where there would otherwise be too many interrupts. (An interrupt will still be used to notify the CPU that the peripheral has finished its task, but no interrupts will have been necessary during the data transfer.)

[Diagram: the I/O device exchanges data with RAM directly; the CPU is bypassed.]

Multi-user environments

Lots of operating systems, especially servers, are multi-user environments. The OS divides its time and resources up between users, just as it does between programs. The OS must manage each user's data and memory space, as well as each process's data and memory space, to ensure that it is secure from access by other users or processes. In a multi-user environment the server must keep track not only of which parts of memory are being used by which process, but also which parts of memory are being used by each user.

Memory Management

- Multi-tasking environment: keeping the memory space of each process safe from other running processes.
- Multi-user environment: keeping the memory space (primary and secondary) of each user safe from other users.
- Allocating and deallocating memory for each process.
- Paging: dividing virtual memory up into equal-sized blocks (pages). Paging allows OSs to allocate non-contiguous chunks of memory to the same process, thus reducing fragmentation problems.

See this video from the OCR A-Level. You don't actually need to know about segmentation, but it may help you to understand paging.

Fragmentation and Paging

Three processes are running and a fourth is launched. However, it can't fit in any of the contiguous blocks of memory. This is called EXTERNAL FRAGMENTATION. Paging allows running processes to be split into even-sized chunks so that areas of memory can be used that might otherwise have been too small. Each chunk is called a page. This gets over the problem of external fragmentation but, as you can see by the white areas within the pages, it creates a different type of problem called INTERNAL FRAGMENTATION. However, there is no perfect way to use memory.

Questions:
- Why do you think the OS doesn't just move process 3 so that there is enough space?
- How does process 4 know where to find the half that has been loaded into a different page?
- What is the relationship between paging and virtual memory?

You need to know about paging:
- Definition: splitting memory up into same-sized chunks called pages.
- Advantage: reduces fragmentation by allowing a running process to occupy non-contiguous blocks of memory.
- Explanation: allows running processes to be split across pages. This uses memory more effectively because unused areas of memory that would otherwise be too small to accommodate a process can be used.
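As a quick illustration (not from the slides) of how an address relates to a page, the sketch below splits a virtual address into a page number and an offset within that page, assuming a made-up page size of 4 KiB.

    // Splitting a virtual address into page number and offset.
    // The 4 KiB page size and the sample address are assumptions for illustration.
    public class PageSplitDemo {
        public static void main(String[] args) {
            final int PAGE_SIZE = 4096;                    // bytes per page (assumed)

            int virtualAddress = 20_000;
            int pageNumber = virtualAddress / PAGE_SIZE;   // which page the address falls in
            int offset = virtualAddress % PAGE_SIZE;       // position within that page

            System.out.println("Virtual address " + virtualAddress
                    + " -> page " + pageNumber + ", offset " + offset);
        }
    }

With these numbers, address 20,000 falls in page 4 at offset 3,616; only the page number needs translating, the offset stays the same.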

Disk Fragmentation

Disks can become fragmented too. This is known as data fragmentation and can have a serious effect on performance, because the seek time of the read head and the rotational latency of the disk are much higher on a fragmented disk. The solution to this is a process called defragmentation, during which the OS physically moves data about on the disk so that it is less fragmented. It can take hours to do this on a large disk. Some modern OSs now defragment 'on the fly', during idle processor time.

Question: Why is data fragmentation not a problem on solid-state disks?

Memory Management: Virtual Memory

In multitasking operating systems, running processes cannot request just any physical memory address, because that address might be allocated to a different process. Instead, processes reference their own virtual memory addresses and the OS translates these into physical addresses.

Virtual memory: some amount of storage, which may come from non-contiguous blocks of memory on a variety of physical devices including RAM and the hard disk, that is presented to the running process as a contiguous block of memory starting at address 0.

Paging is not the same as virtual memory, but is closely associated with it: the OS might swap out less frequently used pages from RAM and store them temporarily on disk. The running process knows nothing about this. This is an abstraction of memory which hides from each running process the complexity of where its data is physically stored.

The MMU (Memory Management Unit) in the CPU is responsible for mapping virtual memory addresses (seen by running processes) to physical memory addresses (managed by the operating system). The use of secondary memory by the OS as if it were primary memory allows more processes to run simultaneously than would normally be possible. Normally, a running process's state is stored in RAM, but if it has been idle for a while, the OS can save its state to the hard disk, freeing up memory for another process. When necessary, it can read the process's state back off the disk and into RAM so that it can run again.

The OS makes it easier for programs to reference memory because programs don't need to worry about the complications of the underlying physical structure of memory and disk. Running processes don't know anything about how the OS is managing their memory. They just reference their virtual memory addresses and the OS takes care of the translation to a physical address. This is another example of abstraction: hiding complexity.

See Wikipedia's virtual memory page.
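To illustrate the kind of translation the MMU performs, here is a small sketch (my own, not from the slides) in which a page table maps virtual page numbers to physical frame numbers. The page size, table entries and addresses are all invented; a real MMU does this in hardware.

    import java.util.HashMap;
    import java.util.Map;

    // A toy page table: virtual page number -> physical frame number for one process.
    // All values are made up for illustration.
    public class PageTableDemo {
        static final int PAGE_SIZE = 4096;

        public static void main(String[] args) {
            Map<Integer, Integer> pageTable = new HashMap<>();
            pageTable.put(0, 7);   // virtual page 0 lives in physical frame 7
            pageTable.put(1, 2);
            pageTable.put(2, 9);

            int virtualAddress = 5_000;
            int virtualPage = virtualAddress / PAGE_SIZE;
            int offset = virtualAddress % PAGE_SIZE;

            Integer frame = pageTable.get(virtualPage);
            if (frame == null) {
                // No entry: the page is on disk, so the OS would have to swap it in.
                System.out.println("Page fault: page " + virtualPage + " is not in RAM");
            } else {
                int physicalAddress = frame * PAGE_SIZE + offset;
                System.out.println("Virtual " + virtualAddress + " -> physical " + physicalAddress);
            }
        }
    }

The running process only ever sees the virtual address (5,000 here); the page table decides that it really lives at physical address 9,096.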

Types of Operating System

Single user, single task: Early computers used to be like this. You would write your program on punch cards and book time on the computer to run it. Users would have to queue up to use the computer. If your program generated an error, you would have to come back next week! Modern examples of single-user, single-tasking OSs are Palm OS and early versions of the iPhone and iPad. Mobile phones are slowly developing multi-tasking capability though.

Single user, multi-tasking: A basic standalone home PC has one user who can run lots of different programs at the same time, e.g. Mac OS or Windows 7.

Multi-user: A network operating system, such as the one at school, in which multiple users can run multiple programs simultaneously, e.g. Windows Server 2012.

Dedicated Operating Systems

A dedicated operating system is designed to support a particular range of applications. It may be optimized for these applications and for a particular set of hardware. Examples are network operating systems, distributed operating systems and real-time operating systems (e.g. air-traffic control, medical/life-support).

E.g. the QNX real-time medical OS advertises: "Under QNX Neutrino, every driver, protocol stack, filesystem and application runs in the safety of memory-protected user space, outside the kernel. Virtually any component can fail — and be automatically restarted — without affecting other components or the kernel." Speed, reliability and fault tolerance are clearly important for medical device operating systems.

OSs for mobile devices may be smaller than those for PCs. Some OSs offer customizations, e.g. Windows has a desktop version, a server version, a mobile version and an embedded-systems version.

Virtualization

Virtualization is the process of making a virtual rather than actual version of something. We usually use virtualization when the reality is rather complex and confusing. The virtual interface to something makes the reality seem uniform, thereby simplifying its use.

Virtual memory is an example. The OS presents a simple, uniform list of addresses that a running program can access, but behind the scenes the program's data may be stored all over the place: in cache, in RAM, on disk, or on a network. OSs may also virtualize storage, presenting drives as a homogeneous set of letters, when in fact some may be USB drives, some may be hard disks, and some may be DVD-ROMs.

Dropbox is a good example of virtualization. It's a sort of virtual folder: it looks and behaves like any other folder, but behind the scenes it is quite different. Virtualization is all about hiding the complexity of the system. It is another example of abstraction.

The JVM (Java Virtual Machine) is the program that runs compiled Java programs. The Java program itself doesn't care which operating system it's on, because it only interacts with the JVM. Once a JVM has been written for a particular OS, any number of Java programs can run on it. This is how Java achieves platform-independence.

Past paper questions

This topic is specified by the IB as 9 hours out of a taught HL course of 125 hours. It can only be examined in Paper 1, since Paper 2 is the option (Java in your case) and Paper 3 is the case study. Paper 1 is 100 marks in 2 hours 10 minutes. 9 hours / 125 hours = 7.2%, so you should expect about 7 marks' worth of questions from this topic in your Paper 1 exam. In fact, over the past few papers, there have only been 2 marks' worth of questions on this topic. Perhaps I should have told you that at the start.