
Context Switches
CS 161: Lecture 3
2/2/17
Context Switching
• A context switch between two user-level threads does not involve the kernel
• In fact, the kernel isn’t even aware of the existence of the threads!
• The user-level code must save/restore register state, swap stack pointers, etc. (see the sketch below)
• Switching from user-mode to kernel-mode (and vice versa) is more complicated
• The privilege level of the processor must change, and the user-level and kernel-level code have to agree on how to pass information back and forth, etc.
• Consider what happens when user-level code makes a system call . . .
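As an aside (not from the lecture slides), here is a minimal sketch of a purely user-level context switch using the POSIX <ucontext.h> API; the names main_ctx, thread_ctx, and thread_func are invented for the example. A real user-level threading library would hand-roll the register save/restore and stack swap in assembly, but the structure is the same: save the current registers and stack pointer, load another thread’s, and keep running, all without entering the kernel.

/* Minimal user-level context switch sketch (illustration only). */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;

static void thread_func(void) {
    printf("user-level thread running\n");
    /* Returning resumes uc_link (main_ctx); the kernel was never told
     * that a "thread switch" happened. */
}

int main(void) {
    char *stack = malloc(64 * 1024);          /* the thread's own stack */

    getcontext(&thread_ctx);                  /* initialize the context */
    thread_ctx.uc_stack.ss_sp = stack;
    thread_ctx.uc_stack.ss_size = 64 * 1024;
    thread_ctx.uc_link = &main_ctx;           /* where to go when it returns */
    makecontext(&thread_ctx, thread_func, 0);

    swapcontext(&main_ctx, &thread_ctx);      /* save our registers/SP, load
                                               * the thread's: a user-level
                                               * context switch */
    printf("back in main\n");
    free(stack);
    return 0;
}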
/* kern/include/kern/syscall.h */

// -- Process-related --
#define SYS_fork 0
#define SYS_vfork 1
#define SYS_execv 2
#define SYS__exit 3
// . . . etc . . .

// -- File-handle-related --
#define SYS_open 45
#define SYS_pipe 46
#define SYS_dup 47
#define SYS_dup2 48
#define SYS_close 49
#define SYS_read 50
// . . . etc . . .

/* userland/lib/libc/arch/mips/syscalls-mips.S */
/*
 * The MIPS syscall ABI is as follows:
 * On entry, call number in v0. The rest is like a
 * normal function call: four args in a0-a3, the
 * other args on the stack.
 *
 * On successful return, zero in a3 register; return
 * value in v0 (v0 and v1 for a 64-bit return value).
 *
 * On error return, nonzero in a3 register; errno value
 * in v0.
 */
[Slide diagram: the user-mode address space (user stack with frames for foo(),
 bar(), and close_stdout(); heap; static data; user code for close_stdout():
 li a0, 1 / li v0, 49 / syscall / jr ra) next to the standard registers
 (SP, PC) and the Coprocessor 0 registers (EPC, Cause, Status; kernel-mode only).]

Executing syscall (or causing another trap) induces the processor to:
• Assign values to special registers in “Coprocessor 0”
  • EPC: Address of the instruction which caused the trap
  • Cause: Set to an enum code representing the trap reason (e.g., sys call,
    interrupt); if the trap was an interrupt, bits are set to indicate the
    type (e.g., timer)
  • Status: In response to the trap, hardware sets bits that elevate the
    privilege mode and disable interrupts
• Jump to the hardwired address 0x80000080

/* kern/arch/mips/include/trapframe.h */

/* MIPS exception codes. */
#define EX_IRQ   0   /* Interrupt */
#define EX_MOD   1   /* TLB Modify (write to read-only page) */
#define EX_TLBL  2   /* TLB miss on load */
#define EX_TLBS  3   /* TLB miss on store */
#define EX_ADEL  4   /* Address error on load */
#define EX_ADES  5   /* Address error on store */
#define EX_IBE   6   /* Bus error on instruction fetch */
#define EX_DBE   7   /* Bus error on data load *or* store */
#define EX_SYS   8   /* Syscall */
#define EX_BP    9   /* Breakpoint */
#define EX_RI   10   /* Reserved (illegal) instruction */
#define EX_CPU  11   /* Coprocessor unusable */
#define EX_OVF  12   /* Arithmetic overflow */
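To make the hardware’s role concrete, here is a rough, runnable C sketch of the steps above. It is an illustration only: the cop0 struct, take_trap(), and main() are invented names, and real hardware does all of this in one step without executing any instructions.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Pseudo-hardware state; the names are invented for this sketch. */
static struct { uint32_t epc, cause, status; } cop0;
static uint32_t pc;

#define EX_SYS 8                              /* matches trapframe.h above */

/* What the MIPS processor itself does on `syscall` or any other trap,
 * before any kernel instruction runs. */
static void take_trap(uint32_t trapping_pc, uint32_t ex_code) {
    cop0.epc   = trapping_pc;                 /* EPC: address of the trapping instruction */
    cop0.cause = ex_code << 2;                /* Cause: exception code field (bits 6..2) */
    /* Status: shift the three KU/IE bit pairs left by two with zero-fill,
     * which switches to kernel mode and disables interrupts. */
    cop0.status = (cop0.status & ~0x3fu) | ((cop0.status << 2) & 0x3fu);
    pc = 0x80000080;                          /* jump to the hardwired handler */
}

int main(void) {
    cop0.status = 0x3;                        /* user mode, interrupts enabled */
    take_trap(0x00400128, EX_SYS);            /* pretend a syscall trapped here */
    printf("epc=%#" PRIx32 " cause=%#" PRIx32 " status=%#" PRIx32 " pc=%#" PRIx32 "\n",
           cop0.epc, cop0.cause, cop0.status, pc);
    return 0;
}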
[Slide diagram: the MIPS virtual address space from 0x0 to 0xffffffff;
 addresses below 0x80000000 are user-mode, addresses at and above
 0x80000000 are kernel-mode.]

Remember that the kernel shares an address space with user-mode code!
So, immediately after syscall (but before kernel code has actually
started executing) . . .
[Slide diagram: the user-mode address space (with close_stdout()'s code:
 li a0, 1 / li v0, 49 / syscall / jr ra) and the registers as before, now
 alongside the kernel-mode address space. Where is the kernel stack?]

//Code at 0x80000080
mips_general_handler:
   j common_exception
   nop               //Delay slot

common_exception:
   //1) Find the kernel stack.
   //2) Push context of interrupted
   //   execution on the stack.
   //3) Jump to mips_trap()
/* kern/arch/mips/locore/exception-mips1.S
 * In the context of this file, an “exception” is a trap,
 * where a “trap” can be an asynchronous interrupt, or a
 * synchronous system call, NULL pointer dereference, etc. */
common_exception:
   mfc0 k0, c0_status      /* Get status register */
   andi k0, k0, CST_KUp    /* Check we-were-in-user-mode bit */
   beq  k0, $0, 1f         /* If clear, from kernel, already
                            * have stack */
   nop                     /* delay slot */
/* kern/arch/mips/include/trapframe.h */
/*
 * Structure describing what is saved on the stack during
 * entry to the exception handler.
 */
struct trapframe {
   uint32_t tf_vaddr;   /* coprocessor 0 vaddr register */
   uint32_t tf_status;  /* coprocessor 0 status register */
   uint32_t tf_cause;   /* coprocessor 0 cause register */
   uint32_t tf_lo;
   uint32_t tf_hi;
   uint32_t tf_ra;      /* Saved register 31 */
   uint32_t tf_at;      /* Saved register 1 (AT) */
   uint32_t tf_v0;      /* Saved register 2 (v0) */
   uint32_t tf_v1;      /* etc. */
   ...

/* kern/arch/mips/locore/exception-mips1.S (continued) */
2:
   /*
    * At this point:
    *    Interrupts are off. (The processor did this for us.)
    *    k0 contains the value for curthread, to go into s7.
    *    k1 contains the old stack pointer.
    *    sp points into the kernel stack.
    *    All other registers are untouched.
    */
[Slide diagram: the user-mode address space and registers as before; the
 kernel-mode address space now holds the saved trapframe on the thread's
 kernel stack, and the kernel is executing
 kern/arch/mips/locore/trap.c::mips_trap(struct trapframe *tf).]
kern/arch/mips/locore/trap.c::
mips_trap(struct trapframe *tf)
• mips_trap() extracts the reason for the trap . . .
uint32_t code = (tf->tf_cause & CCA_CODE) >> CCA_CODESHIFT;
• . . . and then calls the appropriate kernel function to handle the trap
if (code == EX_IRQ) { //Error-checking code is elided
    mainbus_interrupt(tf);
    goto done2;
}
if (code == EX_SYS) {
    syscall(tf);
    goto done;
} //. . . etc . . .
/* kern/arch/mips/syscall/syscall.c */
void
syscall(struct trapframe *tf) { /* Error-checking elided */
    int callno, err;
    int32_t retval;

    callno = tf->tf_v0;
    switch (callno) {
    case SYS_reboot:
        err = sys_reboot(tf->tf_a0); /* The argument is
                                      * RB_REBOOT,
                                      * RB_HALT, or
                                      * RB_POWEROFF. */
        break;
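The slide stops at the first case, but it is worth seeing how syscall() finishes: it translates err/retval back into the MIPS convention described earlier (zero/nonzero in a3, value or errno in v0) and advances the saved PC past the syscall instruction. The continuation below follows the standard OS/161 pattern; treat it as a sketch rather than a verbatim quote.

    /* ...more cases... */
    default:
        err = ENOSYS;            /* unknown syscall number */
        break;
    }

    /* (In the full version, retval is initialized to 0 before the switch.) */
    if (err) {
        tf->tf_v0 = err;         /* errno value in v0 */
        tf->tf_a3 = 1;           /* nonzero in a3 signals an error */
    } else {
        tf->tf_v0 = retval;      /* return value in v0 */
        tf->tf_a3 = 0;           /* zero in a3 signals success */
    }

    /* Advance the PC saved in the trapframe so that returning to user
     * mode resumes *after* the syscall instruction, rather than
     * re-executing it forever. */
    tf->tf_epc += 4;
}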
[Slide diagram: as before, but the kernel stack now also holds the frame for
 kern/arch/mips/syscall/syscall.c::syscall(struct trapframe *tf), called
 from mips_trap(tf).]
Status register when user code runs (the low bits hold three KU/IE pairs):
   11 11 11      Privilege: User, Interrupts: Enabled

On a trap, the processor left-shifts the two bits with zero-fill
(kernel mode, interrupts disabled); rfe will right-shift the two bits
with one-fill:
   11 11 00

If the kernel later enables interrupts (11 11 01), then nested traps
are possible; a nested trap shifts the pairs again:
   11 01 00

/* kern/arch/mips/locore/exception-mips1.S */
   jal mips_trap    /* call it */
   nop              /* delay slot */
[Slide diagram: the user-mode address space and registers, with
 close_stdout()'s code: li a0, 1 / li v0, 49 / syscall / jr ra.]

What if close_stdout() had wanted to check the return value of close()?
• In this example, close_stdout() directly invoked syscall, so
  close_stdout() must know about the MIPS syscall conventions:
  • On successful return, zero in a3 register; return value in v0
    (v0 and v1 for a 64-bit return value)
  • On error return, nonzero in a3 register; errno value in v0
• In real life, developers typically invoke system calls via libc; libc
  takes care of handling the syscall conventions and setting the libc
  errno variable correctly
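To illustrate what such a libc wrapper does, here is a small, self-contained sketch. It is not OS/161’s actual libc: my_close() and the fake __raw_close() stub are invented stand-ins for the real assembly stub in syscalls-mips.S, which loads 49 into v0 and the fd into a0, executes syscall, and reports (v0, a3) back to the wrapper.

#include <errno.h>
#include <stdio.h>

/* Stand-in for the real assembly stub; here we just fake an EBADF failure
 * so the example runs anywhere. */
static int __raw_close(int fd, int *v0_out) {
    (void)fd;
    *v0_out = EBADF;      /* pretend the kernel returned errno EBADF in v0 */
    return 1;             /* pretend a3 was nonzero (error) */
}

/* What a libc-style close() wrapper does with the MIPS syscall convention. */
static int my_close(int fd) {
    int v0;
    int a3 = __raw_close(fd, &v0);   /* a3 == 0 on success, nonzero on error */
    if (a3) {
        errno = v0;                  /* on error, v0 holds the errno value */
        return -1;
    }
    return v0;                       /* on success, v0 holds the return value */
}

int main(void) {
    if (my_close(1) < 0)
        perror("close");             /* prints the errno set by the wrapper */
    return 0;
}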
Context-switching a Thread Off The CPU
• In the previous example, a thread:
• was running in user-mode
• invoked a system call to trap into the kernel
• ran in kernel-mode using the thread’s kernel stack
• returned to user-mode without ever relinquishing the CPU
• However, kernel-mode execution might need to sleep . . .
• Ex: waiting for a lock to become available
• Ex: waiting for an IO operation to complete
• . . . so this means that we need to save the kernel-mode state,
just like we saved the user-mode state during the trap!
kern/include/thread.h
struct thread {
threadstate_t t_state; /* State this thread is in */
void *t_stack; /* Kernel-level stack: Used for
* kernel function calls, and
* also to store user-level
* execution context in the
* struct trapframe */
struct switchframe *t_context; /* Saved kernel-level
* execution context */
/* ...other stuff... */
};
Suppose that kernel-mode execution
needs to go to sleep on a wchan . . .
void wchan_sleep(struct wchan *wc, struct spinlock *lk) {
    /* may not sleep in an interrupt handler */
    KASSERT(!curthread->t_in_interrupt);
    /* must hold the spinlock */
    KASSERT(spinlock_do_i_hold(lk));
    /* must not hold other spinlocks */
    KASSERT(curcpu->c_spinlocks == 1);

    thread_switch(S_SLEEP, wc, lk); //Kernel-mode execution
                                    //is suspended . . .
    spinlock_acquire(lk);           //. . . and restored again!
}
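For context, this is roughly how a caller uses wchan_sleep(); the pattern follows OS/161’s semaphore P() in kern/thread/synch.c, though the code below is a paraphrase rather than a quote. wchan_sleep() releases the spinlock, puts the thread to sleep, and re-acquires the spinlock after a matching wchan_wakeone()/wchan_wakeall() from V().

void P(struct semaphore *sem) {
    KASSERT(curthread->t_in_interrupt == false);

    spinlock_acquire(&sem->sem_lock);
    while (sem->sem_count == 0) {
        /* Sleeping releases sem_lock, switches us off the CPU, and
         * re-acquires sem_lock after we are woken up; re-check the
         * condition because another thread may have taken the count. */
        wchan_sleep(sem->sem_wchan, &sem->sem_lock);
    }
    KASSERT(sem->sem_count > 0);
    sem->sem_count--;
    spinlock_release(&sem->sem_lock);
}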
The Magic of thread_switch()
• thread_switch() will add the current thread-to-sleep to the
wc_threads list of the wchan
• Then, thread_switch() swaps in a new kernel-level execution . . .
/* do the switch (in assembler in switch.S) */
switchframe_switch(&cur->t_context, &next->t_context);
. . . where cur is the currently-executing thread-to-sleep, and next is
the new thread to start executing
• Unlike a user-to-kernel context transition due to an interrupt, this
context switch is voluntary!
An Aside: Calling Conventions
• A calling convention determines how a compiler
implements function calls and returns
• How are function parameters passed to the callee: registers
and/or stack?
• How is the return address back to the caller passed to the
callee: registers and/or stack?
• How are function return values stored: registers and/or stack?
• Calling conventions ensure that code written by different
developers can interact!
• We’ve already seen one example: MIPS syscall convention
Calling Conventions
• Most ISAs do not mandate a particular calling convention, although the
  ISA’s structure may influence calling conventions
  • Ex: 32-bit x86 only has 8 general-purpose registers, so most calling
    conventions pass function arguments on the stack, and pass return
    values on the stack
  • Ex: MIPS R3000 has 32 general-purpose registers, so passing arguments
    via registers is less painful

MIPS registers:
   a0-a3   Function arguments
   t0-t7   Temp values (caller-saved)
   s0-s8   Saved values (callee-saved)
   v0-v1   Function return values
Registers: Caller-saved vs. Callee-saved
• Caller-saved registers hold a function’s temporary values
• The callee is free to stomp on those values during execution
• If the caller wants to guarantee that a caller-saved register isn’t clobbered
by the callee, then:
• Before the call: the caller must push the register value onto the stack
• After the call: the caller must pop the register value from the stack
• Callee-saved registers hold “persistent” values
• The callee must ensure that, when the callee returns, the registers have
their pre-call value
• This means:
• At the beginning of the callee: if the callee wants to use those registers,
the callee must first push the old register values onto the stack
• When the callee returns: any callee-saved registers must be popped
from the stack into the relevant registers
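As an illustration (an assumption about typical compiler behavior, not code from the lecture; the functions are invented), consider how a MIPS compiler would treat the values in this function pair:

#include <stdio.h>

static int helper(int x) {        /* the callee: if it used s0, it would have to
                                   * push/pop s0 so caller's `keep` survives */
    return x + 10;
}

static int caller(int a, int b) {
    int scratch = a * 2;          /* short-lived: fine in a caller-saved register
                                   * (e.g., t0); not needed after the call, so
                                   * nothing to spill */
    int keep = b + 1;             /* live across the call: the compiler prefers a
                                   * callee-saved register (e.g., s0) so the value
                                   * survives helper() without a reload */
    int r = helper(scratch);      /* argument passed in a0, result returned in v0 */
    return r + keep;              /* caller's return value also goes out in v0 */
}

int main(void) {
    printf("%d\n", caller(3, 4)); /* prints 21 */
    return 0;
}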
• thread_switch() swaps in a new kernel-level execution . . .
/* do the switch (in assembler in switch.S) */
switchframe_switch(&cur->t_context, &next->t_context);
. . . where cur is the currently-executing thread-to-sleep, and next is the
new thread to start executing
• Because of the calling convention, the call to switchframe_switch()
automatically pushes the necessary caller-saved registers onto the stack
• So, switchframe_switch() uses hand-coded assembly to (see the switchframe
sketch after this list):
• push callee-saved registers onto the stack (including ra, which contains the address of
the instruction in thread_switch() after the call to switchframe_switch())
• update cur’s struct switchframe *t_context to point to the saved registers (so
now, all of cur’s kernel-level execution context is on its kernel stack)
• change the kernel stack pointer to be next’s kernel stack pointer
• restore next’s callee-saved kernel-level execution context using next’s switchframe
• jump to the restored ra value; caller restores the caller-saved registers; next has now
returned from switchframe_switch()!
/* do the switch (in assembler in switch.S) */
switchframe_switch(&cur->t_context, &next->t_context);

/*
* When we get to here, we are either running in the next
* thread, or have come back to the same thread again,
* depending on how you look at it. That is,
* switchframe_switch returns immediately in another thread
* context, which in general will be executing here with a
* different stack and different values in the local
* variables. (Although new threads go to thread_startup
* instead.) But, later on when the processor, or some
* processor, comes back to the previous thread, it's also
* executing here with the *same* value in the local
* variables.
