SBG Study Advance OS Unit 2


Unit – 2

DTrace:
DTrace (DTrace.exe) is a command-line tool that displays system information and events.
DTrace is an open-source tracing platform that has been ported to Windows. It provides dynamic
instrumentation of both user and kernel functions.

systrace:
systrace is the primary tool for analyzing Android device performance. However, it is really a
wrapper around other tools: it is the host-side wrapper around atrace, the device-side executable
that controls userspace tracing and sets up ftrace, the primary tracing mechanism in the Linux
kernel. systrace uses atrace to enable tracing, then reads the ftrace buffer and wraps it all in a
self-contained HTML viewer.
systrace helps you analyze how the execution of your application fits into the larger Android
environment, letting you see system and application process execution on a common timeline.
The tool allows you to generate highly detailed, interactive reports from devices running Android
4.1 or higher.

Kprobes:
Kprobes is a debugging mechanism for the Linux kernel which can also be used for monitoring
events inside a production system. You can use it to weed out performance bottlenecks, log
specific events, trace problems, and so on.
Kprobes enables you to dynamically break into any kernel routine and collect debugging and
performance information non-disruptively. A kprobe can be inserted on virtually any instruction
in the kernel. A return probe (kretprobe) fires when a specified function returns.
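
As a rough illustration, the sketch below is a minimal kprobes kernel module. It assumes a kernel
built with CONFIG_KPROBES; the probed symbol name (do_sys_openat2) is an assumption that varies
between kernel versions, so adjust it for your kernel. Build it as an out-of-tree module and watch
the output with dmesg.

/* Minimal kprobes sketch: log every call to an (assumed) kernel routine. */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/sched.h>

static struct kprobe kp = {
    .symbol_name = "do_sys_openat2",   /* assumed symbol; adjust for your kernel */
};

/* Pre-handler: runs just before the probed instruction executes. */
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
    pr_info("kprobe hit: %s (pid %d)\n", p->symbol_name, current->pid);
    return 0;
}

static int __init kprobe_example_init(void)
{
    int ret;

    kp.pre_handler = handler_pre;
    ret = register_kprobe(&kp);
    if (ret < 0) {
        pr_err("register_kprobe failed: %d\n", ret);
        return ret;
    }
    pr_info("kprobe registered at %s\n", kp.symbol_name);
    return 0;
}

static void __exit kprobe_example_exit(void)
{
    unregister_kprobe(&kp);
    pr_info("kprobe unregistered\n");
}

module_init(kprobe_example_init);
module_exit(kprobe_example_exit);
MODULE_LICENSE("GPL");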

Loading and Linking:


The first step in the creation of an active process is to load a program into main memory and
create a process image. In a scenario typical of most systems, the application consists of several
compiled or assembled modules in object-code form. These are linked to resolve any references
between modules; at the same time, references to library routines are resolved. The library
routines themselves may be incorporated into the program or referenced as shared code that must
be supplied by the operating system at run time. Below, we summarize the key features of linkers
and loaders, beginning with the loading task when only a single program module is involved and no
linking is required.

Difference Between Loading and Linking:


1. The key difference between linking and loading is that linking generates the executable file
of a program, whereas loading brings the executable file produced by linking into main memory
for execution.
2. Linking takes as input the object modules of a program generated by the assembler, whereas
loading takes as input the executable module generated by linking.
3. Linking combines all the object modules of a program to generate an executable module; it
also resolves references to library functions in the object modules against the built-in
libraries of the high-level programming language. Loading, on the other hand, allocates space
for the executable module in main memory.

ELF or Executable and Linkable Format:


ELF, or Executable and Linkable Format, is a common standard file format for executables on
Linux systems. Compared to other executable formats, ELF is far more flexible and is not bound
to any particular processor or instruction-set architecture; it has replaced older formats such
as COFF (Common Object File Format) in *NIX-like operating systems.
ELF is the main format used in most Unix-based operating systems, including Linux. Every ELF
file begins with the byte 0x7F followed by the characters "ELF". ELF executable files contain
executable code, sometimes referred to as text, and data.
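
To make the header concrete, here is a small user-space sketch that reads an ELF header using the
standard <elf.h> definitions and checks the 0x7F 'E' 'L' 'F' magic bytes. It assumes a 64-bit
(ELF64) file on a Linux system.

/* Read and sanity-check the ELF header of the file named on the command line. */
#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;                                 /* the ELF header sits at offset 0 */
    if (fread(&eh, sizeof eh, 1, f) != 1) { perror("fread"); fclose(f); return 1; }
    fclose(f);

    /* Every ELF file starts with the magic bytes 0x7F 'E' 'L' 'F'. */
    if (memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "not an ELF file\n");
        return 1;
    }

    printf("class: %s\n", eh.e_ident[EI_CLASS] == ELFCLASS64 ? "ELF64" : "ELF32");
    printf("type : %u (2 = executable, 3 = shared object)\n", (unsigned)eh.e_type);
    printf("entry: 0x%llx\n", (unsigned long long)eh.e_entry);
    return 0;
}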

Internals of linking and dynamic linking:


Linking is the process of connecting all the modules or functions of a program so that the
program can be executed. It takes one or more object modules and combines them into a single
object file. The linker, also known as the link editor, takes object modules from the assembler
and forms an executable file for the loader. Thus, as the name suggests, linking is the process
of collecting and combining pieces of data and code into a single file.
Static Linking
Static linking is done at build time, before the program runs. The linker takes the collection
of object files and the command-line arguments and generates a fully linked executable file that
is then loaded and executed.
Dynamic Linking
Dynamic linking is a technique intended to overcome the shortcomings of static linking. With
static linking, every executable ends up with its own copy of routines that are common across
programs, which is inefficient. For instance, nearly every program needs the printf() function,
so a copy of it would be present in every executable, wasting space both in virtual memory and
in the file system.
In the dynamic linking approach, the linker does not copy the routines into the executable.
Instead, it records that the program depends on the library. When the program executes, the
dynamic linker binds the function calls in the program to the shared library present on disk.
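
The same idea can also be exercised explicitly at run time with dlopen()/dlsym(), which load a
shared library and bind a symbol only when the program asks for it. The sketch below resolves
cos() from the math library; the library name "libm.so.6" is an assumption that holds on typical
glibc systems (link with -ldl on older glibc versions).

/* Explicit run-time loading and symbol binding with the dl* API. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load the shared math library at run time. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Bind the symbol "cos" from the library to a function pointer. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}
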
Spinlock:
A spinlock is a locking mechanism in which a thread waits for the lock to become available by
spinning, i.e., repeatedly checking in a loop until the lock is free. A spinlock is intended to
be held only for a short time, and it is most useful on multiprocessor systems. Once a thread
acquires the spinlock, it holds it until it explicitly releases it. In some implementations the
spinlock is automatically released if the thread holding it blocks or goes to sleep.
A spinlock avoids the overhead of OS process rescheduling and context switching, which makes it
an effective way to make threads wait for very short periods. As a result, spinlocks are used in
most operating system kernels. However, if a thread holds a spinlock for an extended period, it
may prevent other threads from making progress: the other threads repeatedly try to acquire the
lock while the holder has not yet released it. This problem is most pronounced on
single-processor systems, where spinning simply wastes the CPU until the lock holder is
scheduled again.
A spinlock is a low-level synchronization method. It is simple and fast to implement, but it
wastes CPU time while a thread is spinning.
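
A minimal user-space sketch of the idea, using C11 atomics and POSIX threads, is shown below. It
is not a kernel spinlock (real kernels additionally disable pre-emption, use pause/back-off
instructions, and so on); it only demonstrates the busy-wait-on-test-and-set core.

/* Minimal spinlock: spin until an atomic test-and-set succeeds. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

typedef struct { atomic_flag flag; } spinlock_t;

static spinlock_t lock = { ATOMIC_FLAG_INIT };
static long counter = 0;

static void spin_lock(spinlock_t *l)
{
    /* Busy-wait until test-and-set observes the flag clear (lock free). */
    while (atomic_flag_test_and_set_explicit(&l->flag, memory_order_acquire))
        ;  /* spin */
}

static void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->flag, memory_order_release);
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_lock(&lock);   /* critical section is very short ...          */
        counter++;          /* ... which is exactly when spinning pays off */
        spin_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}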

Open Solaris Adaptive Mutexes:


Adaptive mutexes are used to protect critical or shared data items that are held only for short
periods (less than a few hundred instructions).

On multiprocessor systems, a thread that encounters a locked adaptive mutex does not
automatically block as it would with a semaphore. Instead, the OS checks whether the thread
holding the adaptive mutex is currently running on a CPU. If it is, the adaptive mutex acts like
a spinlock and the waiting thread busy-waits until the mutex is released; otherwise the adaptive
mutex acts like a semaphore and the waiting thread blocks. This is done to reduce the overhead
associated with context switching.
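
The sketch below illustrates only this acquire decision; it is not the Solaris implementation.
The helpers owner_is_running(), spin_once(), and block_on() are hypothetical placeholders
standing in for the scheduler queries and sleep/wake-up machinery a real kernel provides.

/* Illustrative adaptive-mutex acquire: spin if the owner is on a CPU, else block. */
#include <stdatomic.h>
#include <stdbool.h>

struct adaptive_mutex {
    atomic_int locked;     /* 0 = free, 1 = held             */
    int        owner_cpu;  /* -1 if the owner is not running */
};

/* Hypothetical: ask the scheduler whether the lock owner is on a CPU. */
static bool owner_is_running(struct adaptive_mutex *m) { return m->owner_cpu >= 0; }

/* Hypothetical: relax the CPU for one spin iteration (e.g., a pause instruction). */
static void spin_once(void) { }

/* Hypothetical: enqueue the caller on the lock's turnstile and sleep. */
static void block_on(struct adaptive_mutex *m) { (void)m; }

static bool try_lock(struct adaptive_mutex *m)
{
    int expected = 0;
    return atomic_compare_exchange_strong(&m->locked, &expected, 1);
}

void adaptive_mutex_acquire(struct adaptive_mutex *m)
{
    while (!try_lock(m)) {
        if (owner_is_running(m))
            spin_once();   /* owner will release soon: busy-wait like a spinlock */
        else
            block_on(m);   /* owner is off-CPU: sleep like a semaphore           */
    }
}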

Solaris also provides reader-writer locks to protect data that are accessed frequently, usually
in a read-only manner, by long sections of code.
Solaris uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex
or a reader-writer lock. A turnstile is a queue structure containing the threads blocked on a
lock; turnstiles are associated with the lock-holding thread rather than with each object.
Turnstiles are organized around priority inheritance: the lock-holding thread temporarily runs
at the highest of the priorities of the threads in its turnstile, which prevents priority
inversion.
The locking mechanisms used by the kernel are also made available to user-level threads, so the
same locks can be used both inside and outside the kernel. The only difference is that priority
inheritance is applied only inside the kernel; user-level threads do not provide this
functionality.

If a user-level thread makes a blocking request such as I/O, the associated LWP and kernel-level
thread are also blocked.

Advantages:

 Lightweight switching between user-level threads without invoking the kernel.
 A thread can block on a kernel system call while still allowing other threads to execute
concurrently (provided that unblocked LWPs (Light Weight Processes) are available).

Pre-emptive Kernels:
A computer system operates in two modes: kernel mode and user mode. Kernel mode is more
privileged than user mode: in kernel mode, programs can directly access memory and hardware
resources, while in user mode they cannot.

A pre-emptive kernel is a kernel that allows a program to be interrupted in the middle of its
execution. In other words, the kernel is capable of stopping the execution of the currently
running process and allowing some other process to execute. Because a pre-emptive kernel does
not allow the processor to run one process continuously for a long time, no single process can
monopolize the CPU. Examples include Windows XP, Windows 2000, Solaris, and IRIX.

Advantage:
A pre-emptive kernel is more suitable for real-time programming, as it will allow a real-time
process to pre-empt a process currently running in the kernel. Furthermore, a pre-emptive
kernel may be more responsive, since there is less risk that a kernel-mode process will run for
an arbitrarily long period before relinquishing the processor to waiting processes.
