Multics Operating System Case Study

Nick P • July 15, 2015 1:58 PM

@ Bruce

I like how Karger and Schell pre-empted a lot of future "discoveries" in that paper. Had more people read it, those discoveries would've happened much sooner and we'd know a lot more. MULTICS was a good design that was a bit too pricey. Like all the best stuff, the cost of hardware back then probably hurt it the most. Lots of systems that allowed excellent security or software engineering had this problem. There's less excuse today given an Intel processor can do over 100,000 context switches a second while still being 95+% idle (per Lynx Inc).

There's actually more opportunity today than ever given Intel and AMD are both doing semi-custom CPU's. One can do a UNIX or Windows variant that removes some cruft while adding in some critical protections from the academic literature. There's literally no excuse outside of money or backward compatibility at this point.

@ Toby

I agree that KeyKOS was probably the best of the old OS's. It had a microkernel, enforced POLA on the whole system, used the capability model, and persisted all running data for better crash recovery. It met extra security requirements with the KeySAFE addition. A modern variant of it was EROS, and CheriBSD leverages the approach on capability-secure hardware.

Note: I was also wowed by the Burroughs architecture, which I found in recent years. I think it's superior to the MULTICS architecture in a number of ways. Especially smart that they tried to deal with the biggest trouble areas in hardware.

@ Nostromo

re C or C++ failure

Empirical studies done by the military and defense contractors in the 80's-90's showed C and C++ programmers wrote software with more defects than users of other languages. The reason, as others showed, is that the language does little to nothing to help developers. The Modula and Oberon languages produced code efficient enough for operating systems, with safety features that knocked out a lot of problems. So, it can be done, even on 80's hardware. C is just sloppy on purpose. On the far end, one system language was designed to knock out as many errors as possible, and those same studies showed it got results. One variant straight-up proves their absence.

So, C is garbage. It always was: a watered-down ALGOL descendant whose weaknesses were compromises to deal with the hardware of the time. That hardware is gone outside the embedded world. Today's hardware is so powerful that many apps run on heavy runtimes such as Java and C#, with even OS prototypes written in those. Even the mainstream has learned we can do better tradeoffs: Go, Rust, Swift, and especially Julia. So, I think moving our low-level efforts up a notch to something more like Modula-3 or Component Pascal, to avoid all C's safety pitfalls, is more than past due. Such languages typically do have an unsafe setting for the few modules and functions that need it.

Also, pretending C doesn't encourage problems doesn't help credibility given all the alternatives going back to the 60's that eliminate some of its problems. If the job is robust software, then might as well use robust languages and tools to build it.

@ supersnail

1. Sample size doesn't matter. Security evaluation happens on architecture, implementation, and so on. The paper was about the evaluation and pen testing of MULTICS along with commentary. Show me a UNIX appliance, I'll show you risk all over the place with proven examples and even actual CVE's from past or present. We'd have less to say about MULTICS during a similar review and even less about KeyKOS/EROS. That's better design in action.

2. I've illustrated C's problems above. The best approach, taken by competitors, was to make things safe by default while also efficient and allowing unsafe constructs in modules where necessary.

3. Most stuff designed has had poor security. If there's no accountability, it's usually even worse. A certification is an independent evaluation of claims against agreed-upon criteria (eg Orange Book, Common Criteria). The pentests of IBM's VM/370, MULTICS, and UNIX showed severe problems along with proposed fixes. That's already proof of the value of the review process. Once good criteria were developed, systems built to them (esp A1-class systems) were pentested by NSA and did *way better* than what came before, with some going unbroken [with methods of the time]. Certifications can certainly be done in useless or weak ways, but that's an argument against *bad* certifications rather than certification in general. Most lessons-learned papers had positive things to say about the impact of B3/A1/EAL6/EAL7 processes on quality.

4. Modern OS's do distribute security fixes. Yet the high[er] assurance OS's of past and present rarely needed security fixes to protect the system's integrity: they got it right the first time with strong design, isolation, detection, and recovery. The fixes at that point are typically for low- or de-privileged components, to protect their attack surface or availability. Modern OS's, on the other hand, seem to have severe problems that compromise the entire system all the time. Matter of fact, their architecture is so poor that you can compromise an entire system by opening an email. (!?) A sordid state of affairs to say the least...

5. Windows has among the worst security foundations. It used a monolithic kernel, insecure interfaces, overprivileged components, overprivileged drivers, undocumented functions, hard-to-parse formats/protocols, weak protocols, and an unsafe language for all of it. Predictably, it was the hardest hit year after year until they finally got their shit together by having Steve Lipner, who did strong security work with Karger, embed better security review into their development processes. They also made some design and tooling changes that helped. It still has unjustifiable vulnerabilities (esp in old code), lacks a trusted path, and lacks POLA for untrusted apps.

@ Multiplexed

Although I argue for the opposite architecture, OpenVMS was impressive to me in terms of its improvements on monolithic architecture, better security features (esp SEVMS), and robust implementation. It achieved far more than most monolithic designs in manageability, security, and reliability. Bad news is it was doomed the second HP took it over, since HP had a competing product (NonStop). Good news is HP recently handed it off to another company to port it to x86, and old versions can still run on increasingly cheap Alpha/Itanium boxes from eBay. May not be dead yet. ;)

One reason for the quality was the development process. They worked regular shifts with weekends off. They spent a whole week adding features and tests for them. They ran the tests, including regression tests, over the weekend. They spent the next week fixing any problems based on priority. Run tests over the weekend. Rinse, repeat... results. A simple method whose resulting quality is still better than many commercial firms achieve, despite the state of the art in QA advancing far past it.

Anyway, I always ask Linux or Windows snobs laughing about VMS if they have any system with 17 years of uptime, a high-throughput box with 5 years of uptime, or if they've ever forgotten how to reboot their system because they never do it. Common experiences for OpenVMS users. Not so much for the Windows or Linux crowd, despite hundreds of millions to billions in development effort put into them.

@ Karellen

"2.1 The security in Unix-type OSs is one of its main features.
2.2 The security is Unix OSs is built in to all versions"

UNIX started out *very insecure* and still has many weaknesses. Data Secure UNIX, Trusted Xenix, UNIX emulation on secure kernels... people worked extremely hard to achieve security with UNIX compatibility. They still had to change system calls and isolate security-critical functionality from the main UNIX codebase, and they achieved medium assurance at best. The strong stuff was all clean-slate and built for purpose. Check out the EROS link above, or especially these systems, to see how differently they're architected to prevent/isolate problems. I'll add that there's not a single case, outside hardware support, of anyone securing a monolithic kernel.

" Depends on development process for Unix system in question. Linux seems fairly resistant to injection of malicious code, as it's been picked up in the past. Can't talk about other unices."

The kernel development team does seem better than average on that. The distros in general and the various software that runs privileged? Plenty of risk there...

"3.2.2 - Many Linux distros use signed archives/repositories to distribute updates which makes this attack hard. Other commercial unices tend to ship via post, which has been faked before, but is not common."

They run on machines that nation-states have 0-days for. It's why the high assurance systems of the past had to make a security argument from the bottom to the top. Strong crypto doesn't matter if the kernel or app gets broken. To see what it takes, here is a copy of the framework for security analysis of systems I posted in a conversation with someone thinking it just takes secure coding.

"3.2.3 - You need to be root to write to boot sectors on unix systems. If the attacker can run code as root to do this, you've already lost."

See setuid root. UNIX architects seemed all about getting attackers' code running as root. TIS's Trusted Xenix OS cleverly removed the risk by making the kernel clear the bit during any write to an executable, with the admin or update process manually resetting it if the change was legit. Couldn't get UNIX developers to adopt that, though. I haven't studied it in a long time, so I'm not sure if setuid issues have been eliminated in modern distros. I just isolate the whole mess.
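The Trusted Xenix countermeasure is easy to model. Here's a minimal sketch (hypothetical names, toy in-memory file object, not real kernel code) of the policy: every write to an executable clears its setuid bit, and only a deliberate privileged step may restore it.

```python
# Toy model of the Trusted Xenix policy described above: writes clear
# the setuid bit; only an explicit admin/update step restores it.

class File:
    def __init__(self, setuid=False):
        self.setuid = setuid
        self.data = b""

def write(f, data):
    f.data += data
    f.setuid = False      # kernel clears the bit on every write

def admin_reset_setuid(f, admin):
    if not admin:
        raise PermissionError("only the admin/update process may reset setuid")
    f.setuid = True

prog = File(setuid=True)
write(prog, b"trojan")            # attacker modifies the binary...
assert prog.setuid is False       # ...and the privilege is gone
admin_reset_setuid(prog, admin=True)
assert prog.setuid is True        # a legit update restores it deliberately
```

The key design point is that the privilege loss is unconditional and automatic, while restoring it requires an out-of-band, accountable action.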

"Given that, I don't see how the claim that Multics is so much more secure than Unix systems, combined with C++ as a programming language, a pretty contemporary system, holds up."

(Multicians correct me if I'm wrong given I mainly read the papers...)

One point: microkernel. Take the number of flaws in UNIX's monolithic kernel which give total system access. Take the number in MULTICS's. The difference is how much more secure MULTICS is due to just that design choice. There's also the reduced number of easily-exploitable bugs due to the programming language choice. The use of rings and segments can sandbox compromises of certain components/apps to the point that attackers might need more than one flaw (i.e. chained exploits) to hit a specific target. MULTICS also had a stack where incoming data didn't flow directly toward the stack pointer, unlike the ludicrous design decision elsewhere that inspired all kinds of UNIX workarounds (and stack overflows) that failed to fix the actual problem.

It's not the most secure design built back in the day. It was an early attempt that made interesting decisions that put it ahead of many competitors, eventually got a positive security evaluation, and informed the design of future systems. Sound architectural and implementation decisions put it way ahead of UNIX in many ways. UNIX's name even comes from the fact that it was a knockoff of MULTICS for cheaper hardware. The more secure systems, such as GEMSOS or XTS-400, copied a number of MULTICS's design/implementation tactics. MULTICS itself is probably far from secure if we did a thorough evaluation on it with modern knowledge. However, I believe its main goal was to be as reliable as a utility (eg phone service) and I'll let the Multicians tell us if it achieved that. If it didn't, Tandem's NonStop architecture certainly did later. ;)

@ Ian Mason

I believe this blog attracts a diverse and higher quality audience than most. Schneier's articles range from the old wisdom to the modern take on things. Unsurprising that many Multicians would converge here. I'm not from that era but have scoured the literature to collect as much wisdom from past efforts as possible. I regularly dump that info here to try to apply it to modern problems. Forgetting what's been learned, not applying proven methods, and reinventing the wheel are IT/INFOSEC's biggest problems.

Although I've linked to better systems, I'd still have settled for a modernized MULTICS far sooner than a modernized UNIX. "Worse is better," though. (sighs) KeyKOS and System/38 are my favorites of the old systems in sheer terms of what the architectures could accomplish to meet all ends. And both were commercially successful, with one still around to show up mainstream OS's. :)

@ Kathryn

"What can we do, as individuals, to encourage software suppliers to create secure products? "

Buy them, even for a little more money. That simple. The tradeoff of secure systems is they might not support app X, feature Y, or price/performance ratio Z. The market wanted highest performance, lowest cost, and backward compatibility with inherently insecure stuff. Vendors that made good stuff mostly went out of business (or that business), with the AS/400 being the sole survivor outside defense systems. Its security got watered down a bit, too, while its functions were expanded and the name changed to IBM i.

One difficulty in jump-starting this is that low volume and niche markets make current offerings a bit expensive. Examples include Sentinel's HYDRA, Secure64's DNS on SourceT OS, LynxSecure virtualization, Mikro-SINA VPN, Cryptophone, and so on. Each does quite a bit better than similar products in its security engineering. They're also going to cost more, with some costing *A LOT* more. Until the market favors security enough, the combo of high development cost and low sales volume will keep licensing or per-unit prices pretty high. It's actually an ideal situation for the non-profit or FOSS crowds to take over, but they take similarly insecure approaches as the commercial sector. (sigh)

Best chance is some DARPA-funded, academic work being turned into a product. We've seen that happen plenty of times. A DARPA-funded, secure CPU and OS combo at least lets us build more secure appliances. Sales of those can gradually improve the platform and libraries toward an eventual general-purpose system. Right now, the only one I know making headway on that approach with open source is GenodeOS.

@ UNTER

Good points. Recognizing this problem was the brilliance of the AS/400, Mac OS, and Nitix product lines. They tried to hide as much complexity from users as possible. AS/400 and Nitix were largely self-managing. I still run into AS/400 systems that have been running largely unattended for almost a decade. New OS or security projects must embed security into the architecture such that secure day-to-day use is the default and easy. Combex's CapDesk was a nice attempt at that.

@ Blair

Singularity has been superseded by VerveOS, whose safety is verified down to the assembler. A project with a similar, probably better, approach to security than Singularity is the JX Operating System. Figured you might like it. Code is available from their web site.

@ lod

You're seeing the big picture a bit more than some. The actual problem starts with the hardware. I elaborate on that here with specific examples of how to do it better, plus old and recent work on it. In a nutshell, the basic constructs with which we build all software shouldn't be so out-of-control by default. Make safe or secure the default, and fewer problems follow even with sloppy coders: mistakes become an exception or crash rather than a code injection.
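A checked language makes the point concrete. This toy snippet (my illustration, not from any particular system) shows a sloppy out-of-bounds store surfacing as a clean, catchable error instead of silent memory corruption:

```python
# "Safe by default": an out-of-bounds access is a runtime exception,
# not an overwrite of adjacent memory.
buf = [0] * 8

def store(i, v):
    buf[i] = v            # bounds-checked by the runtime

store(3, 42)              # normal, in-bounds write
try:
    store(64, 99)         # sloppy code: index far past the end
    corrupted = True
except IndexError:        # the mistake is contained, not exploitable
    corrupted = False

assert buf[3] == 42 and corrupted is False
```

In C the second store would scribble over whatever happened to live past the buffer; here the worst case is a crash you can catch and log.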

@ ranz

Schell and especially Karger were great. Yet, I think the Burroughs, System/38, and capability designers (esp KeyKOS) were smarter in the end. Their mechanisms proved to enforce POLA for both business and military needs, with greater efficiency for the better designs. Add easier updates, persistence, and mapping of requirements to model (easy as OOP, really) for a slam dunk argument. This is despite me spending years using and supporting Schell's approach. My long-term takeaway from Schell & Karger was their design, implementation, and evaluation approaches (esp in the Orange Book). The basic foundation they and the rest of the old guard laid down has proven to be something we're building on to this day. It's why I tell the young crowd that high assurance security stands on the shoulders of giants.

Plus, the systems back then were so much more interesting and innovative (see the Flex machine with Ten15). We're seeing a resurgence of innovation now due to cloud and embedded needs. Reinventing much old stuff. Getting fun again, but not as secure or reliable. ;)

Note: I've strongly considered seeing if Schell would port GEMSOS to the SAFE or CHERI processors. The result would be a verified kernel, with verified policy enforced at the hardware level, and over 20 years without a breach. That would make a hell of a marketing pitch, eh? It also wouldn't have to depend on Intel (and its insecure baggage) given SAFE or CHERI could be put on an FPGA (esp antifuse) running standalone.

@ Richard Lamson

"One of the security aspects of access control not mentioned above is that there were separate controls for read, write and execute on segments. Typically writable segments were not executable"

The segmented protections of MULTICS and other B3/A1 systems had a strong effect on the resulting security. Recent INFOSEC research has rediscovered that. Secure64's SourceT leverages similar protection via Itanium's paging hardware & memory keys. Native Client uses segments, albeit in a weaker way. Most interesting I've seen is Code Pointer Integrity, which protects pointers with them.

Even Intel's literature on Atom processors said segments were 4x more efficient than paging. So: fine-grained protection, high efficiency, and still no adoption by most OS vendors. They could always get rid of the management burden by building it into their tools and libraries. No real effort, though...

@ Multicians

Thanks for sharing your experiences. They were interesting reads as usual.

Presentation transcript: Memory management, part 3 (Operating Systems, 2015, Danny Hendler, Meni Adler and Roie Zivan)

Slide 1: Outline. Segmentation; case studies: MULTICS, Pentium, Unix, Linux, Windows.

Slide 2: Segmentation. Several address spaces per process. A compiler, for example, needs segments for source text, symbol table, constants, stack, parse tree, and the compiler's executable code. Most of these segments grow during execution.

Slide 3: Users' view of segments [figure]

Slide 4: Segmentation: segment table [figure]

Slide 5: Segmentation hardware [figure]

Slide 6: Segmentation vs. paging [figure]

Slide 7: Segmentation pros and cons. Advantages: segments grow and shrink independently; sharing between processes is simpler; linking is easier; protection is easier. Disadvantages: pure segmentation reintroduces external fragmentation, and segments may be very large (what if they don't fit into physical memory?).

Slide 8: Segmentation architecture. A logical address is composed of the pair (segment number, offset). The segment table maps to the linear address space; each entry has a base (the starting linear address where the segment resides in memory) and a limit (the length of the segment). The segment-table base register (STBR) points to the segment table's location in memory. The segment-table length register (STLR) indicates the number of segments used by a program; segment number s is legal only if s < STLR.
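The base/limit translation on this slide is only a few lines of logic. A minimal sketch (toy table, made-up addresses) of the checks and the final address computation:

```python
# Pure segmentation: logical (segment, offset) -> linear address,
# with the STLR and limit checks described on the slide.
segment_table = [           # (base, limit) pairs, indexed by segment number
    (0x1000, 0x400),        # segment 0: 1 KB starting at linear 0x1000
    (0x8000, 0x100),        # segment 1: 256 bytes starting at 0x8000
]
STLR = len(segment_table)   # number of segments in use

def translate(s, offset):
    if s >= STLR:                      # segment number must be < STLR
        raise MemoryError("illegal segment")
    base, limit = segment_table[s]
    if offset >= limit:                # offset checked against the limit
        raise MemoryError("segment limit violation")
    return base + offset               # linear address

assert translate(1, 0x20) == 0x8020
```

Everything after the two checks is a single addition, which is why segmentation hardware could be so simple.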

Slide 9: Segmentation architecture (cont.). Protection: each segment-table entry contains a validation bit (0 means illegal segment) and read/write/execute privileges. Protection bits are associated with segments; code sharing occurs at the segment level. Since segments vary in length, memory allocation is a dynamic storage-allocation problem (external fragmentation).

Slide 10: Sharing of segments [figure]

Slide 11: Segmentation with paging. Segments may be too large and cause external fragmentation. The two approaches may be combined: a segment table, with pages inside each segment, solves the fragmentation problems. Most systems today provide a combination of segmentation and paging.


Slide 13: The MULTICS OS. Ran on Honeywell computers. Segmentation + paging. Up to 2^18 segments; segment length up to 2^16 36-bit words. Each program has a segment table (itself a segment), and each segment has a page table.

Slide 14: MULTICS data structures [figure]. The process descriptor segment holds one 36-bit descriptor per segment, each pointing to that segment's page table. A descriptor contains the main-memory address of the page table (18 bits), the segment length in pages, a page-size flag (0 = 1024-word pages, 1 = 64-word pages), a paged/not-paged flag, misc bits, and protection bits.

Slide 15: MULTICS memory reference procedure. 1. Use the segment number to find the segment descriptor (the segment table is itself paged because it may be large; the descriptor base register points to its page table). 2. Check whether the segment's page table is in memory; if not, a segment fault occurs, and if there is a protection violation, a TRAP (fault). 3. The page table entry is examined; a page fault may occur. If the page is in memory, the start-of-page address is extracted from the entry. 4. The offset is added to the page origin to construct the main memory address. 5. Perform the read/store etc.

Slide 16: MULTICS address translation scheme. An address splits into segment number (18 bits), page number (6 bits), and page offset (10 bits).
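The 18/6/10 split and the segment-then-page lookup can be modeled in a few lines. This is a toy sketch (made-up page tables and frame numbers, no fault handling beyond a missing-entry error):

```python
# Toy MULTICS address translation: 18-bit segment number, 6-bit page
# number, 10-bit offset (1024-word pages).
page_tables = {3: {0: 0x40, 1: 0x41, 2: 0x42}}   # segment -> {page: frame}

def multics_translate(addr34):
    seg    = addr34 >> 16                # top 18 bits: segment number
    page   = (addr34 >> 10) & 0x3F       # next 6 bits: page number
    offset = addr34 & 0x3FF              # low 10 bits: offset in page
    frame  = page_tables[seg][page]      # missing entry would be a fault
    return (frame << 10) | offset        # physical word address

addr = (3 << 16) | (2 << 10) | 5         # segment 3, page 2, offset 5
assert multics_translate(addr) == (0x42 << 10) | 5
```

Note how the small page-number field (6 bits) limits a segment to 64 pages of 1024 words, matching the 2^16-word maximum segment length on slide 13.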

Slide 17: MULTICS TLB [figure]. A simplified version of the MULTICS TLB; the existence of two page sizes makes the actual TLB more complicated.

Slide 18: Multics: additional checks during segment link (call). Since segments are mapped to files, ACLs (access-control lists) are checked on first access (open), and protection rings are checked. A very advanced 1970's architecture.


Slide 20: Pentium: segmentation + paging. Segmentation with or without paging is possible. 16K segments per process; segment size up to 4G 32-bit words; page size 4K. There is a single global GDT, and each process has its own LDT. Six segment registers (CS, DS, SS, ...) may store 16-bit segment selectors: a 13-bit index, 1 bit selecting GDT (0) or LDT (1), and a 2-bit privilege level (0-3). When a selector is loaded into a segment register, the corresponding descriptor is stored in microprogram registers.
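The selector layout on this slide is just bit fields, so decoding one is a one-liner per field. A small sketch:

```python
# Decode a 16-bit Pentium segment selector:
# bits 15..3 = index, bit 2 = table indicator (GDT/LDT), bits 1..0 = RPL.
def decode_selector(sel):
    rpl   = sel & 0x3                       # requested privilege level 0-3
    table = "LDT" if (sel >> 2) & 1 else "GDT"
    index = sel >> 3                        # index into the chosen table
    return index, table, rpl

# 0x23 is a typical user-mode selector: entry 4 of the GDT at ring 3.
assert decode_selector(0x23) == (4, "GDT", 3)
```

Packing the privilege level into the selector itself is what lets the hardware check ring transitions on every segment register load.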

Slide 21: Pentium segment descriptors [figure]. Pentium code segment descriptor; data segments differ slightly.

Slide 22: Pentium: forming the linear address. The segment descriptor is in an internal (microcode) register. A null selector (TRAP) or a paged-out segment (TRAP) aborts the reference; otherwise the offset is checked against the limit field of the descriptor, and the base field of the descriptor is added to the offset (4K page size).

Slide 23: Intel Pentium address translation [figure]. A 32-bit linear address splits into a 10-bit directory index, a 10-bit page-table index, and a 12-bit offset; each page-directory entry can cover up to 4 MB of physical address space.
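The 10/10/12 split is again simple bit arithmetic. A sketch of just the address decomposition (the table walk itself would index real page directories):

```python
# Split a 32-bit linear address for Pentium two-level paging:
# 10-bit directory index, 10-bit page-table index, 12-bit offset.
def split_linear(addr):
    directory = addr >> 22              # top 10 bits
    table     = (addr >> 12) & 0x3FF    # middle 10 bits
    offset    = addr & 0xFFF            # low 12 bits (4K page)
    return directory, table, offset

assert split_linear(0x12345678) == (0x48, 0x345, 0x678)
```

Each directory entry points at a page table of 1024 entries mapping 4K pages each, which is where the 1024 x 4K = 4 MB figure on the slide comes from.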


Slide 25: UNIX process address space [figure]. Each process has text, initialized data, BSS, and stack regions mapped by the OS into physical memory.

Slide 26: Memory-mapped file [figure]. A file can be mapped into the address spaces of multiple processes, backed by the same physical memory.

Slide 27: Unix memory management system calls. Not specified by POSIX, but common: s = brk(addr) changes the data segment size (addr specifies the first address following the new size); a = mmap(addr, len, prot, flags, fd, offset) maps file fd, starting at offset, for length len, at virtual address addr (0 lets the OS choose the address); s = munmap(addr, len) unmaps a file or a portion of it.
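Python's mmap module wraps the same mmap system call, so the slide's mapping semantics can be demonstrated directly (assuming a Unix-like system with a unified page cache, so writes through the mapping are visible via read()):

```python
# Map a file into memory, modify it through the mapping, and observe
# the change through the ordinary file descriptor.
import mmap
import tempfile

with tempfile.TemporaryFile() as f:
    f.write(b"hello world")
    f.flush()
    m = mmap.mmap(f.fileno(), 0)      # length 0 = map the whole file
    assert m[:5] == b"hello"          # file contents, seen as memory
    m[0:5] = b"HELLO"                 # a plain memory store...
    m.close()                         # ...flushed back to the file
    f.seek(0)
    result = f.read()

assert result == b"HELLO world"
```

This is exactly the sharing mechanism slide 26 pictures: two processes mapping the same file would see each other's stores through the shared physical pages.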

Slide 28: Unix 4BSD memory organization [figure]. Main memory is an array of page frames; the kernel keeps a core map with one entry per frame, recording free-list links (used when the frame is on the free list), disk block and device numbers, a block hash code, an index into the proc table, the segment type (text/data/stack) and offset within it, plus misc flags (free, in transit, wanted, locked).

Slide 29: Unix page daemon. It is assumed useful to keep a pool of free pages. Freeing of page frames is done by the pagedaemon, a process that sleeps most of the time and is awakened periodically to inspect the state of memory: if less than 1/4 of page frames are free, it frees page frames. This strategy performs better than evicting pages only when needed (and writing the modified ones to disk in a hurry). The net result is the use of all available memory as a page pool. Uses a global clock algorithm: the two-handed clock.

Slide 30: Page replacement in Unix. The two-handed clock algorithm clears the reference bit with its first hand and frees pages with its second hand. Its parameter is the "angle" between the hands; a small angle leaves only "busy" pages. If a page is referenced before the second hand reaches it, it will not be freed.
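A toy simulation makes the two-hand interaction concrete. This sketch (my own simplified model: one linear sweep instead of a circular buffer, a callback standing in for "the page was referenced in between") shows the front hand clearing reference bits and the back hand freeing whatever stayed unreferenced:

```python
# Toy two-handed clock: front hand clears reference bits; the back hand,
# ANGLE positions behind, frees pages whose bit is still clear.
pages = [{"ref": True} for _ in range(8)]
ANGLE = 2                                        # distance between hands

def sweep(referenced_between):
    """One pass; referenced_between(i) says whether page i was touched
    after the front hand cleared its bit."""
    freed = []
    n = len(pages)
    for front in range(n + ANGLE):
        if front < n:
            pages[front]["ref"] = False          # front hand clears
            if referenced_between(front):
                pages[front]["ref"] = True       # busy page re-referenced
        back = front - ANGLE
        if 0 <= back < n and not pages[back]["ref"]:
            freed.append(back)                   # back hand frees it
    return freed

busy = {1, 5}                                    # pages that stay in use
assert sweep(lambda i: i in busy) == [0, 2, 3, 4, 6, 7]
```

Shrinking ANGLE gives pages less time to prove they are busy, so more get freed; that is the "small angle leaves only busy pages" remark on the slide.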

Slide 31: Page replacement in Unix, cont'd. If there is thrashing, the swapper process removes processes to secondary storage: remove processes idle for 20 seconds or more; if none, swap out the oldest of the 4 largest processes. Who gets swapped back in is a function of time out of memory and size.


Slide 33: Linux processes. Each process gets 3GB of virtual memory, with the remaining 1GB for the kernel and page tables. The virtual address space is composed of areas with the same protection and paging properties (pageable or not, direction of growth). Each process has a linked list of areas, sorted by virtual address (text, data, memory-mapped files, ...).

Slide 34: Linux page table organization (32 bits) [figure]. On 32-bit architectures some pages are 4K and some 2M. See http://linux-mm.org/PageTableStructure

Slide 35: Linux page table organization (64 bits) [figure]. On 64-bit architectures some pages are 4K and some 2M. See http://linux-mm.org/PageTableStructure

Slide 36: Linux page table organization. A virtual address splits into directory, middle, page, and offset fields, walked through the page directory, page middle directory, and page table to the selected word in the page. This is the situation on Alpha; on the Pentium the page middle directory is degenerate. Expanded to 4-level indirect paging after Linux 2.6.10.

Slide 37: Linux main memory management. The kernel is never swapped. The rest holds user pages, file system buffers, and variable-size device drivers. The buddy algorithm is used; in addition, linked lists of same-size free blocks are maintained, and to reduce internal fragmentation a second memory allocation scheme (the slab allocator) manages smaller units inside buddy blocks. Demand paging (no pre-paging), with dynamic backing store management.
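The buddy algorithm's core idea, splitting a power-of-two block and keeping each half's "buddy" on a free list, fits in a short sketch. This is a minimal toy model (allocation only, no coalescing on free, addresses in pages):

```python
# Minimal buddy-allocator sketch: round the request up to a power of
# two, then split a larger free block until the right size is reached.
def round_up_order(pages):
    order = 0
    while (1 << order) < pages:
        order += 1
    return order                      # e.g. 3 pages -> order 2 (4 pages)

free_lists = {3: [0]}                 # one free block of 2^3 = 8 pages at 0

def alloc(pages):
    order = round_up_order(pages)
    o = order
    while o not in free_lists or not free_lists[o]:
        o += 1                        # find a bigger block to split
    block = free_lists[o].pop()
    while o > order:                  # split; each buddy goes on a list
        o -= 1
        buddy = block + (1 << o)
        free_lists.setdefault(o, []).append(buddy)
    return block, 1 << order

addr, size = alloc(3)                 # 3 pages -> 4-page block
assert (addr, size) == (0, 4)         # internal fragmentation: 1 page
assert free_lists[2] == [4]           # the 4-page buddy at page 4 is free
```

The internal fragmentation visible here (a 3-page request consumes 4 pages) is exactly why the slab allocator exists for small objects, per the slide.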

Slide 38: Linux page replacement algorithm. A variant of the clock algorithm. The page-freeing daemon inspects processes by size, from large to small, and pages in virtual address order (maybe unused ones are neighbors...). Freed pages are categorized as clean, dirty, or un-backed-up. Another daemon writes out dirty pages periodically.


Slide 40: Win 2000: virtual address space [figure]. Virtual address space layout for 3 user processes; white areas are private per process, shaded areas are shared among all processes. What are the pros/cons of mapping the kernel area into the process address space?

Slide 41: Win 2000: memory management concepts. Each virtual page can be in one of the following states: free/invalid (currently not in use; a reference causes an access violation), committed (code/data mapped to the virtual page), or reserved (allocated to a thread but not mapped yet; when a new thread starts, 1MB of process space is reserved for its stack), with readable/writable/executable protection. Dynamic (just-in-time) backing store management improves performance by writing modified data in chunks; up to 16 pagefiles. Supports memory-mapped files.

Slide 42: Implementation of memory management [figure]. A page table entry for a mapped page on the Pentium.

Slide 43: Win 2000: page replacement and working sets. Processes have working sets (WS) defined by two parameters, the minimal and maximal number of pages. A process's WS is updated on each page fault: on a fault with the WS below its max, the page is added to the WS; on a fault with the WS at its max, a page within the WS is replaced. If a process thrashes, its working set size is increased. Memory is managed by keeping a number of free pages, a complex function of memory use, at all times. When the balance-set manager runs (every second) and needs to free pages, surplus pages (beyond the WS) are removed from processes (large background before small foreground...), and page 'age counters' are maintained (on a multiprocessor, reference bits don't work since they are local to each CPU...).
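The per-fault working-set rule reduces to a small piece of logic. A toy sketch (my simplification: FIFO choice of victim, no min-threshold handling, no aging):

```python
# Toy working-set update on page fault: add while below the max,
# replace within the working set once at the max.
WS_MAX = 3
ws = []                          # this process's working set

def page_fault(page):
    if page in ws:
        return                   # already resident, nothing to do
    if len(ws) < WS_MAX:
        ws.append(page)          # below max: just add the page
    else:
        ws.pop(0)                # at max: evict within the WS (FIFO here)
        ws.append(page)

for p in [1, 2, 3, 4]:
    page_fault(p)
assert ws == [2, 3, 4]           # page 1 was replaced when the WS filled
```

The important contrast with the Unix global clock is that replacement here is local: a faulting process steals only from its own working set, not from other processes.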

Slide 44: Physical memory management [figure]. Various page lists and the transitions between them.
