Talk:SITS

From Computer History Wiki
Revision as of 17:50, 27 October 2022 by Bqt (talk | contribs) (Virtual memory?)

Virtual memory?

I've always understood 'virtual memory' to mean that not all the contents of a process' address space had to be resident in main memory for the process to run. (Any time it tries to use a missing 'piece' when it is running, it is stopped, the missing element is brought in, and then it is allowed to continue execution.) Was SITS really a 'virtual memory' OS, in that meaning of the term?

The PDP-11 was in theory capable of supporting operation in that way (using the 8 'pages' - they're actually 'segments', as the two were classically defined, and were so called in the first version of the -11/45 processor manual; my theory is that DEC changed it to 'pages' for marketing reasons), but AFAIK no -11 OS ever actually did so - probably because the -11's process address space was so small, there was no real need/use for that - it was simpler to just swap the whole process in. But I'm not very familiar with any -11 time-sharing OS other than UNIX, so perhaps one did? Jnc (talk) 13:50, 19 October 2022 (CEST)

SITS has a .MAP system call that can manipulate the page table. I haven't checked exactly how it's used, or what you can do. It seems starting a PDUMP binary does not do demand paging. However, it looks like .MAP can map in file pages, so maybe it's possible.
I certainly have seen page faults due to a page being unmapped, then handled, and then the faulting instruction being restarted. Larsbrinkhoff (talk) 14:23, 19 October 2022 (CEST)
That last does sound like virtual memory - even if it only happens on mapped file pages. Jnc (talk) 15:09, 19 October 2022 (CEST)
I've seen others misunderstand virtual memory before, so I'll try to expand here. Virtual memory, just like a virtual machine, means that you have the impression of having your own instance of the resource (memory, in this case), even though it doesn't actually exist as such. The fact that this memory isn't really the same as physical memory is the point. In your virtual memory, it appears as if *all* memory is there for you to use, and no one else exists. It starts at address 0 and goes up to whatever top you are able to address.
If there is a second process on the same machine, it also has virtual memory, which starts at address 0 and goes up from there. But even though both processes refer to address 0, they are not referring to the same memory cell, because it's virtual memory, not real memory. You commonly then have an MMU which maps the virtual memory at address 0 into some physical memory address, and at the physical memory level these two processes obviously map to different physical addresses.
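The point above can be sketched with a toy page table (a generic illustration, not SITS or PDP-11 code; the 8 KB page size and the frame numbers are just picked for the example): two processes both use virtual address 0, but each has its own table, so the same virtual address resolves to different physical addresses.

```python
PAGE_SIZE = 8192  # 8 KB, as on the PDP-11 (chosen for the example)

def translate(page_table, vaddr):
    """Map a virtual address to a physical one via a per-process page table."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

proc_a = {0: 3}   # process A: virtual page 0 -> physical frame 3
proc_b = {0: 7}   # process B: virtual page 0 -> physical frame 7

# Both processes use virtual address 0, but end up at different
# physical memory cells.
assert translate(proc_a, 0) != translate(proc_b, 0)
```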
Now, obviously, this virtual memory might not all exist in physical memory at a given point in time. If the process runs with essentially all of its virtual memory mapped to physical memory, or none of it, we're talking about a system that uses swapping. The other option is that only parts of virtual memory are mapped to physical memory, usually on a per-page basis. This is where demand paging comes into the picture: it is the technique that allows an individual process to have more virtual memory than there is physical memory. With swapping, the virtual memory needed by all processes combined can exceed physical memory, but an individual process cannot run unless all of it fits into physical memory at once. Demand paging is commonly done at the OS level, so programs are unaware that it happens, just as with swapping. But there are also examples of more "manual" demand paging. Overlays, for example, which were a common technique on some PDP-11 systems, are demand paging done at the user level instead of in the kernel, though often still without the software being explicitly aware of it; instead the compiler, in combination with the libraries, solves it for you.
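The demand-paging mechanism described above can be modeled in a few lines (a toy simulation, not how any real kernel is written): a page is brought into "RAM" only when a reference to it faults, and a second reference to the same page costs nothing.

```python
PAGE_SIZE = 8192

class DemandPagedSpace:
    """Toy demand pager: pages come in from backing store on first touch."""
    def __init__(self, backing_store):
        self.backing = backing_store   # page number -> page contents on "disk"
        self.resident = {}             # page number -> page contents in "RAM"
        self.faults = 0

    def read(self, vaddr):
        page, offset = divmod(vaddr, PAGE_SIZE)
        if page not in self.resident:              # page fault:
            self.faults += 1
            self.resident[page] = self.backing[page]  # bring the page in,
        return self.resident[page][offset]            # then restart the access

space = DemandPagedSpace({0: bytes(PAGE_SIZE), 1: bytes(PAGE_SIZE)})
space.read(0)          # first touch of page 0: one fault
space.read(10)         # same page: no fault
space.read(PAGE_SIZE)  # first touch of page 1: second fault
assert space.faults == 2
```

The program only ever calls `read`; the faulting and fetching are invisible to it, which is the sense in which demand paging is transparent.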
The residency of the memory is not a defining factor of whether it is virtual or not. Your program cannot tell whether the system swaps the whole process in at once or does demand paging; from the program's point of view it appears the same. You have your memory, and it's all there, every address of it. This is in contrast to not having virtual memory, where you have to share the memory with whatever else might be running at the same time. (I'm sure everyone can think of some system or other where that was the case.)
But demand paging and virtual memory are two different concepts. They work well together, but they are not interchangeable.
Speaking of the PDP-11 hardware: yes, the initial 11/45 processor handbook used the term 'segments', but all later documentation changed the term to 'pages', and I would argue that 'pages' is the more correct term for what the PDP-11 MMU has. Segments are usually described by a base address and a length, and the whole address space is then remapped based on those. The PDP-11 MMU has 8 pages. Each starts at a fixed virtual address, 8K apart; one page seamlessly starts where the previous one ends, and which page is used is determined by the virtual address. Thus, depending on how much space we're talking about, you will be using one or several pages to map it. And of course, the mapping for each page does not care what the mapping for the previous page was, so there is no need for contiguous physical memory to back your virtual memory. The main reason some people want to talk about the PDP-11 MMU pages as segments is that there is a high level of control over the size of each page. The pages are not fixed in size (well, they have a maximum size that you can't go beyond), which was what one traditionally saw on other machines. But nowadays this is actually pretty common: almost no modern architecture has just one fixed page size anymore, and page sizes have grown; the 8K of a PDP-11 page is not even considered that big by today's standards. So there is pretty much nothing about the PDP-11 MMU pages that matches how a segmented memory model works, but pretty much everything is the same as in most any other paged MMU. So why should we not call them pages?
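The translation scheme described above can be sketched as follows (a simplified model of the PDP-11 MMU for upward-expanding pages: it ignores kernel/user modes, access-control bits, and downward-expanding stack pages, and the register values in the usage lines are made up for illustration). The top 3 bits of the 16-bit virtual address select one of the 8 pages; the page's base register supplies a physical base in 64-byte units, and its length field limits how much of the 8 KB page is valid.

```python
CLICK = 64  # PDP-11 relocation granularity: 64-byte units

def translate(par, pdr_len, vaddr):
    """Translate a 16-bit virtual address through 8 variable-length pages.

    par      -- per-page physical base, in 64-byte clicks (like a PAR)
    pdr_len  -- per-page length, in 64-byte blocks 0..127 (like a PDR's
                page length field), for upward-expanding pages
    """
    apf = (vaddr >> 13) & 0o7       # active page field: which of the 8 pages
    df = vaddr & 0o17777            # displacement within the 8 KB page
    if (df >> 6) > pdr_len[apf]:    # beyond the page's length: fault
        raise MemoryError("page length fault")
    return par[apf] * CLICK + df    # physical = base + displacement

par = [0] * 8
par[0] = 0o100                      # page 0 based at physical 0o100 * 64
pdr_len = [127] * 8                 # all pages at full 8 KB length
assert translate(par, pdr_len, 0o200) == 0o100 * CLICK + 0o200
```

Note that the base registers of adjacent pages are completely independent, which is the "no contiguous physical memory needed" property, while the per-page length field is the feature that tempts people to call these segments.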
Finally, demand-paged systems can certainly be done on a PDP-11, even if it is hard to find examples of it being done. You could argue that the stack handling in BSD on the PDP-11 uses demand paging: memory is not allocated for much of a stack when a program starts, and when a reference happens below what is currently allocated for the stack, the stack is grown and the additional memory gets mapped in as needed. However, once it has been mapped, it is then always in the address space, and a valid mapping exists at all times while the process is executing. So it's not a fully demand-paged example, but a bit of a hybrid.
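That hybrid behavior can be modeled in a few lines (a toy sketch of the idea, not BSD code; the addresses and page size are invented for the example): a reference below the mapped region faults and grows the stack, but pages stay mapped once grown.

```python
PAGE_SIZE = 8192

class AutoGrowStack:
    """Toy model of grow-on-fault stack handling: mapped on demand,
    but never unmapped again while the process runs."""
    def __init__(self, top):
        self.top = top
        self.bottom = top - PAGE_SIZE      # one page mapped initially
        self.grow_faults = 0

    def access(self, addr):
        while addr < self.bottom:          # fault: grow the stack downward
            self.grow_faults += 1
            self.bottom -= PAGE_SIZE       # map another page, permanently

stack = AutoGrowStack(top=0o160000)
stack.access(stack.top - 100)              # inside the initial page: no fault
stack.access(stack.top - 3 * PAGE_SIZE)    # two pages deeper: two grow faults
assert stack.grow_faults == 2
stack.access(stack.top - 3 * PAGE_SIZE)    # already mapped: no further fault
assert stack.grow_faults == 2
```

The one-way growth (pages are added on demand but never evicted) is what makes this demand-ish paging rather than full demand paging.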
--Bqt (talk) 17:50, 27 October 2022 (CEST)