From: KDLinux
Subject: [LKDP] scheduler/initial.tex
Date: Wed, 24 Jul 2002 05:56:34 -0700 (PDT)

Hi Abhi, 
 I am working on process scheduling.
 I have completed the initialization chapter.
 Attaching the TeX file here.
 Also, I need to check the code into CVS.

 - KD.
=====
-- KD.
www.kirandivekar.cjb.net

\chapter{Initialization}
        \section{start\_kernel Function}
                When a PC is powered on, the boot firmware initializes all the 
hardware present in the system and makes sure each device is up and running. 
After this initial hardware boot sequence, the kernel comes into the picture 
when control is transferred to the \textit{start\_kernel} function. This 
function is defined in the file \url{init/main.c}. It performs initialization 
of all the operating system's subsystems and sets up the data structures used 
by the kernel. It calls sub-functions to initialize interrupt handling, the 
process scheduler, the softirq system, kernel timers, the signal mechanism and 
the SMP (symmetric multi-processing) mechanism.

        \begin{verbatim}

                lock_kernel();
                printk(linux_banner);
                setup_arch(&command_line);
                setup_per_cpu_areas();
                printk("Kernel command line: %s\n", saved_command_line);
                parse_options(command_line);
                trap_init();
                init_IRQ();
                sched_init();
                softirq_init();
                time_init();
                /* some memory initialization code */
                fork_init();
                signals_init();
                smp_init();

        \end{verbatim}
 
                \subsection{Macro lock\_kernel}
                The macro lock\_kernel is defined in the file 
\url{include/linux/smp_lock.h}. On non-SMP systems it expands to nothing, 
because there are no inter-CPU locks on a single-CPU system. On i386 SMP 
systems, lock\_kernel is an inline function defined in 
\url{include/asm-i386/smplock.h}:
        \begin{verbatim}
                extern __inline__ void lock_kernel(void)
                {
                        if (!++current->lock_depth)
                                spin_lock(&kernel_flag);
                }
        \end{verbatim}
 
                So on a non-SMP system, the macro expands to 'do\{\}  while(0)' 
and gets optimised away.
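
                \par The counting trick in lock\_kernel can be sketched in 
ordinary user space. The names used here (bkl\_held, bkl\_lock, bkl\_unlock) 
are illustrative stand-ins, not kernel API; the point is only that the 
underlying spinlock is taken when the per-task depth counter rises from -1 to 
0, so a task may call lock\_kernel recursively without deadlocking:
        \begin{verbatim}
        /* Userspace sketch of the lock_depth counting idea: the lock
         * is really acquired only when the depth counter rises from -1
         * to 0, so recursive acquisition by the same task is safe.
         * All names here are illustrative, not kernel API. */
        #include <assert.h>

        static int bkl_held;        /* stands in for spin_lock(&kernel_flag) */
        static int lock_depth = -1; /* per-task counter, -1 means "not held" */

        static void bkl_lock(void)   { if (++lock_depth == 0) bkl_held = 1; }
        static void bkl_unlock(void) { if (--lock_depth < 0)  bkl_held = 0; }

        int main(void)
        {
                bkl_lock();          /* first acquisition takes the real lock */
                assert(bkl_held && lock_depth == 0);
                bkl_lock();          /* recursive acquisition: counter only */
                assert(bkl_held && lock_depth == 1);
                bkl_unlock();
                assert(bkl_held);    /* still held, outer lock outstanding */
                bkl_unlock();
                assert(!bkl_held && lock_depth == -1);
                return 0;
        }
        \end{verbatim}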

                \subsection{Functions trap\_init, init\_IRQ}
                The functions trap\_init and init\_IRQ are architecture 
dependent and perform the initialization of the trap and IRQ hardware. These 
functions are defined in the architecture-dependent section of the kernel 
code, viz. trap\_init in \url{arch/i386/kernel/traps.c} and init\_IRQ in 
\url{arch/i386/kernel/i8259.c}.

                \subsection{Function sched\_init}
                The process is a basic entity in any Unix-based system. In a 
multitasking environment, a number of processes execute on one or more CPUs. 
Each process gets a fair chance to run on a CPU, depending on its 
characteristics. This allocation is done by a special kernel component known 
as the ``scheduler''. The process scheduler is initialized by calling the 
function sched\_init, defined in \url{kernel/sched.c}.

                \subsubsection{Scheduler data structure}
                The scheduler's basic data structure is an array of runqueues, 
defined in \url{kernel/sched.c}. NR\_CPUS\footnote{defined in 
\url{include/linux/threads.h}} represents the number of CPUs present in the 
system; its value is 32 in SMP mode and 1 in non-SMP mode.
        \begin{verbatim}
        struct runqueue {
                spinlock_t lock;
                unsigned long nr_running, nr_switches, expired_timestamp;
                signed long nr_uninterruptible;
                task_t *curr, *idle;
                prio_array_t *active, *expired, arrays[2];
                int prev_nr_running[NR_CPUS];
                task_t *migration_thread;
                list_t migration_queue;
        } ____cacheline_aligned;

        static struct runqueue runqueues[NR_CPUS] __cacheline_aligned;
        \end{verbatim}
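                \par Looking up a particular CPU's runqueue is plain array 
indexing into runqueues. A simplified userspace sketch of that lookup follows; 
the struct contents are trimmed, smp\_processor\_id is stubbed out, and the 
helper macro names (cpu\_rq, this\_rq) follow the style of 
\url{kernel/sched.c} but are reproduced here only as an illustration:
        \begin{verbatim}
        /* Sketch of per-CPU runqueue lookup: one runqueue per CPU,
         * selected by array index.  smp_processor_id() is stubbed for
         * this userspace illustration. */
        #include <assert.h>

        #define NR_CPUS 32

        struct runqueue { unsigned long nr_running; };

        static struct runqueue runqueues[NR_CPUS];

        static int smp_processor_id(void) { return 3; /* pretend CPU 3 */ }

        #define cpu_rq(cpu) (runqueues + (cpu))
        #define this_rq()   cpu_rq(smp_processor_id())

        int main(void)
        {
                cpu_rq(3)->nr_running = 7;
                assert(this_rq()->nr_running == 7); /* same runqueue object */
                assert(this_rq() == &runqueues[3]);
                return 0;
        }
        \end{verbatim}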
                The elements of the above structure are described below:
        \begin{description}
        \item[lock] \index{lock} Spinlock used to serialize access to the 
runqueue.
        \item[nr\_running] \index{nr\_running} Total number of runnable 
processes on this runqueue, i.e. processes in the TASK\_RUNNING state.
        \item[curr, idle] \index{task\_t} Pointers to the task currently 
running on this CPU and to the CPU's idle task.\footnote{task\_t is a type 
definition of task\_struct; see \url{include/linux/sched.h}}
        \item[active, expired, arrays] \index{arrays} The two priority arrays 
used by the scheduler: active holds tasks that still have timeslice left, 
expired holds tasks that have used theirs up, and both point into the storage 
provided by arrays[2].
        \item[migration\_thread] \index{mthread} Migration thread associated 
with the runqueue. Refer to section~\ref{psched:structs} for more details.
        \item[migration\_queue] \index{mqueue} Migration queue associated with 
the runqueue. Refer to section~\ref{psched:structs} for more details.
        \end{description}

                \par Each process has a process descriptor associated with it. 
This process information is stored in struct task\_struct, defined in 
\url{include/linux/sched.h}. All process descriptors are linked together by 
the process list, and the runqueue list links together the process descriptors 
of all runnable processes. In both cases, the init\_task process descriptor 
acts as the list header.
                The sched\_init function initializes the timers by calling the 
init\_timers\footnote{defined in \url{kernel/timer.c}} function, and also 
initializes the bottom halves associated with the task queue (TQUEUE\_BH) and 
the immediate queue (IMMEDIATE\_BH). Refer to section~\ref{int:bh} for more 
information about \texttt{bottom halves}.

        \begin{verbatim}
                init_timers();
                init_bh(TQUEUE_BH, tqueue_bh);
                init_bh(IMMEDIATE_BH, immediate_bh);
        \end{verbatim}
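
                \par The active and expired priority arrays in struct runqueue 
drive the scheduling algorithm: a task that exhausts its timeslice moves to 
the expired array, and when the active array empties, the two pointers are 
simply swapped. A minimal userspace sketch of that swap follows; the task 
bookkeeping is omitted and the helper name new\_epoch is an invention for this 
illustration:
        \begin{verbatim}
        /* Sketch of the active/expired array swap: arrays[2] provides
         * the storage, active/expired are just pointers into it, and
         * starting a new timeslice epoch is a pointer swap. */
        #include <assert.h>

        struct prio_array { int nr_active; };

        struct runqueue {
                struct prio_array *active, *expired, arrays[2];
        };

        static void new_epoch(struct runqueue *rq)
        {
                struct prio_array *tmp = rq->active;
                rq->active = rq->expired;  /* expired tasks get fresh slices */
                rq->expired = tmp;         /* old active becomes new expired */
        }

        int main(void)
        {
                struct runqueue rq;
                rq.active  = &rq.arrays[0];
                rq.expired = &rq.arrays[1];
                rq.arrays[0].nr_active = 0; /* every task used its timeslice */
                rq.arrays[1].nr_active = 5;

                if (rq.active->nr_active == 0)
                        new_epoch(&rq);

                assert(rq.active == &rq.arrays[1]);
                assert(rq.active->nr_active == 5);
                return 0;
        }
        \end{verbatim}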

                \subsection{Function softirq\_init}
                The softirq\_init function initializes all the tasklets by 
calling the tasklet\_init function. The concept of a tasklet was introduced in 
kernel version 2.4, and the primary tasklet\_struct is defined in 
\url{include/linux/interrupt.h}. Tasklets are basically a multithreaded 
analogue of bottom halves. Refer to section~\ref{int:tasklet} for more 
information about \texttt{Tasklets}.
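
                \par At its core a tasklet is just a function pointer plus an 
argument, queued for deferred execution. A stripped-down userspace sketch of 
that idea follows; the real tasklet\_struct in 
\url{include/linux/interrupt.h} also carries state and count fields for 
locking, all of which are omitted here:
        \begin{verbatim}
        /* Minimal model of a tasklet: a function plus data, queued on
         * a list and run later, outside interrupt context.  Locking
         * and per-CPU details of the real implementation are omitted. */
        #include <assert.h>
        #include <stddef.h>

        struct tasklet {
                struct tasklet *next;
                void (*func)(unsigned long);
                unsigned long data;
        };

        static struct tasklet *pending;   /* list of scheduled tasklets */

        static void tasklet_schedule(struct tasklet *t)
        {
                t->next = pending;
                pending = t;
        }

        static void run_tasklets(void)    /* the deferred-work pass */
        {
                while (pending) {
                        struct tasklet *t = pending;
                        pending = t->next;
                        t->func(t->data);
                }
        }

        static unsigned long total;
        static void add_func(unsigned long d) { total += d; }

        int main(void)
        {
                struct tasklet a = { NULL, add_func, 2 };
                struct tasklet b = { NULL, add_func, 40 };
                tasklet_schedule(&a);
                tasklet_schedule(&b);
                run_tasklets();
                assert(total == 42 && pending == NULL);
                return 0;
        }
        \end{verbatim}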

                \subsection{Function time\_init}
                time\_init is an architecture-dependent function used to 
initialize the timer hardware.
                \subsection{Function signals\_init}
                The signals\_init function calls 
kmem\_cache\_create\footnote{Refer to \url{mm/slab.c} for more details} to 
create a signals-related cache. The name parameter passed is \texttt{sigqueue}, 
which can be found in the file \url{/proc/slabinfo}.
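
                \par kmem\_cache\_create sets up a cache of fixed-size objects 
so that frequently allocated structures (here, signal queue entries) can be 
recycled cheaply instead of going to the general allocator each time. A toy 
free-list sketch of that idea follows; it is only an illustration of the 
caching principle, and the real slab allocator in \url{mm/slab.c} is far more 
elaborate:
        \begin{verbatim}
        /* Toy fixed-size object cache: freed objects go on a free list
         * and are handed back on the next allocation, mimicking what a
         * slab cache buys for structures like sigqueue. */
        #include <assert.h>
        #include <stdlib.h>

        struct object { struct object *next_free; char payload[64]; };

        static struct object *free_list;

        static struct object *cache_alloc(void)
        {
                if (free_list) {            /* recycle a freed object */
                        struct object *o = free_list;
                        free_list = o->next_free;
                        return o;
                }
                return malloc(sizeof(struct object)); /* slow path */
        }

        static void cache_free(struct object *o)
        {
                o->next_free = free_list;
                free_list = o;
        }

        int main(void)
        {
                struct object *a = cache_alloc();
                cache_free(a);
                struct object *b = cache_alloc(); /* same storage back */
                assert(a == b);
                free(b);
                return 0;
        }
        \end{verbatim}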
                \subsection{Function smp\_init}
                smp\_init is an architecture-dependent function used to 
perform SMP initialization of all CPUs. On i386, the underlying code is in 
\url{arch/i386/kernel/smpboot.c}.

                \par After completing all this initialization, what does the 
kernel do? Correct, it sits idle. The function \textit{cpu\_idle} is 
architecture dependent and is defined in \url{arch/i386/kernel/process.c}. The 
kernel loops there, waiting for some process to be scheduled via the 
\textit{schedule()} function.
        \begin{verbatim}
                while(1) {
                        while (!need_resched())
                                idle();
                        schedule();
                }
        \end{verbatim}
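
                \par The idle loop above can be modelled in a few lines of 
user space: the CPU spins in a low-power idle routine until some event sets 
the need-resched flag, and only then calls into the scheduler. In the sketch 
below a stand-in for a timer interrupt sets the flag; the counters and the 
flag variable are inventions for this illustration:
        \begin{verbatim}
        /* Userspace model of cpu_idle(): spin in idle() until
         * need_resched() becomes true, then call schedule().  A fake
         * "interrupt" sets the flag so the loop terminates. */
        #include <assert.h>

        static int need_resched_flag;
        static int idle_calls, schedule_calls;

        static int need_resched(void) { return need_resched_flag; }

        static void idle(void)
        {
                if (++idle_calls == 3)     /* pretend a timer interrupt */
                        need_resched_flag = 1; /* ...woke a runnable task */
        }

        static void schedule(void)
        {
                schedule_calls++;
                need_resched_flag = 0;     /* scheduler picked a task */
        }

        int main(void)
        {
                /* one iteration of the kernel's while(1) idle loop */
                while (!need_resched())
                        idle();
                schedule();

                assert(idle_calls == 3 && schedule_calls == 1);
                return 0;
        }
        \end{verbatim}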
