In today's era, our computer systems have multi-core processors. User expectations have also risen, with the result that applications are becoming more and more complex. To take full advantage of multi-core systems, we need applications that use multiple threads.
Suppose our computer has only one CPU, i.e., it is capable of executing only one operation at a time. Now, what would happen if the CPU had to execute a task that takes a long time, e.g., connecting to a remote server and downloading a file? All other operations would be paused, meaning the whole machine would appear unresponsive to the user. Things get even worse when that long-running operation contains a bug and never ends. As the whole machine is unresponsive, the only thing we can do is restart it.
To avoid this problem, threads are used. In current operating systems, each application runs in its own process. A process isolates an application from other applications by giving it its own virtual memory and by ensuring that different processes can't influence each other. Each process runs in its own thread. A thread is something like a virtualized CPU. If an application crashes or hits an infinite loop, only that application's process is affected.
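As a minimal sketch of this idea, the example below runs a slow operation on a separate worker thread so the main thread stays free. Java is used here purely to illustrate the concept (the article's context is the CLR, where `System.Threading.Thread` plays the same role); the class and method names are hypothetical.

```java
public class ResponsiveDemo {
    // Simulates a slow operation, e.g., downloading a file from a remote server.
    static String download() {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "file contents";
    }

    // Runs the download on a worker thread; the calling (main) thread is
    // free to keep doing other work until it chooses to wait for the result.
    static String fetchInBackground() {
        final String[] result = new String[1];
        Thread worker = new Thread(() -> result[0] = download());
        worker.start();   // the slow work now runs in the background
        System.out.println("main thread is still free while the download runs");
        try { worker.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return result[0];
    }

    public static void main(String[] args) {
        System.out.println("downloaded: " + fetchInBackground());
    }
}
```

If the worker thread hit an infinite loop here, only this process would hang; the rest of the machine would remain responsive, which is exactly the isolation the paragraph above describes.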
How Threading Works
Multithreading is managed internally by a thread scheduler, a function the CLR typically delegates to the operating system. A thread scheduler ensures all active threads are allocated appropriate execution time, and that threads that are waiting or blocked (e.g., on user input) do not consume CPU time.
On a single-processor computer, a thread scheduler performs time-slicing: rapidly switching execution between each of the active threads. Under Windows, a time slice is typically in the tens-of-milliseconds region.
On a multi-processor computer, multithreading is implemented with a mixture of time-slicing and genuine concurrency, where different threads run code simultaneously on different CPUs.
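The scheduling behavior described above can be observed with a small sketch: two threads started on the same machine both make progress to completion, whether the scheduler time-slices them on one CPU or runs them genuinely in parallel on two. Java is used for illustration; the names are hypothetical.

```java
public class SchedulerDemo {
    // Starts two compute-bound threads and returns how far each one got.
    // Each thread touches only its own counter, so no locking is needed.
    public static long[] runTwoThreads() {
        final long[] counts = new long[2];
        Thread t0 = new Thread(() -> { for (long i = 0; i < 1_000_000; i++) counts[0]++; });
        Thread t1 = new Thread(() -> { for (long i = 0; i < 1_000_000; i++) counts[1]++; });
        t0.start(); t1.start();   // both threads are now runnable; the OS
                                  // scheduler divides CPU time between them
        try { t0.join(); t1.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return counts;            // both loops ran to completion
    }

    public static void main(String[] args) {
        long[] c = runTwoThreads();
        System.out.println(c[0] + " " + c[1]); // prints: 1000000 1000000
    }
}
```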
Threads vs Processes
- An executing instance of a program is called a process.
- Some operating systems use the term ‘task‘ to refer to a program that is being executed.
- A process is always stored in main memory, also termed primary memory or random access memory.
- Therefore, a process is termed an active entity. It disappears if the machine is rebooted.
- Several processes may be associated with the same program.
- On a multiprocessor system, multiple processes can be executed in parallel.
- On a uni-processor system, though true parallelism is not achieved, a process scheduling algorithm is applied and the processor is scheduled to execute each process one at a time yielding an illusion of concurrency.
- Example: executing multiple instances of the ‘Calculator’ program. Each instance is termed a process.
- A thread is a subset of the process.
- It is termed a ‘lightweight process’, since it is similar to a real process but executes within the context of a process and shares the resources allotted to that process by the kernel (see kquest.co.cc/2010/03/operating-system for more info on the term ‘kernel’).
- Usually, a process has only one thread of control – one set of machine instructions executing at a time.
- A process may also be made up of multiple threads of execution that execute instructions concurrently.
- Multiple threads of control can exploit the true parallelism possible on multiprocessor systems.
- On a uni-processor system, a thread scheduling algorithm is applied and the processor is scheduled to run each thread one at a time.
- All the threads running within a process share the same address space, file descriptors, and other process-related attributes (each thread does, however, get its own stack).
- Since the threads of a process share the same memory, synchronizing access to the shared data within the process becomes critically important.
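The last point above can be sketched as follows: two or more threads increment one shared counter, and a lock ensures the read-modify-write is atomic, so no updates are lost. Java's `synchronized` is used to illustrate the idea (in .NET, the `lock` statement plays the same role); the class name is hypothetical.

```java
public class SharedCounterDemo {
    private long count = 0;                 // shared data within the process
    private final Object lock = new Object();

    void increment() {
        synchronized (lock) {               // only one thread may hold the lock at a time
            count++;                        // the read-modify-write is now atomic
        }
    }

    // Spawns `threads` threads, each incrementing the shared counter
    // `perThread` times, and returns the final count.
    public static long run(int threads, int perThread) {
        SharedCounterDemo c = new SharedCounterDemo();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return c.count;
    }

    public static void main(String[] args) {
        System.out.println(run(4, 100_000)); // prints: 400000 (no lost updates)
    }
}
```

Without the `synchronized` block, concurrent `count++` operations can interleave and silently drop increments, which is exactly why synchronization "becomes critically important" once memory is shared.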
Maintaining a responsive UI
- By running time-consuming tasks on a parallel “worker” thread, the main UI thread is free to continue processing keyboard and mouse events.
- Code that performs intensive calculations can execute faster on multicore or multiprocessor computers if the workload is shared among multiple threads in a “divide-and-conquer” strategy.
- On multicore machines, you can sometimes improve performance by predicting something that might need to be done, and then doing it ahead of time.
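The "divide-and-conquer" point above can be sketched as a parallel sum: the input array is split into slices, each worker thread sums its own slice, and the main thread combines the partial results. Java is used for illustration; the names are hypothetical.

```java
public class ParallelSumDemo {
    // Splits `data` into `workers` slices and sums each slice on its own thread.
    public static long parallelSum(long[] data, int workers) {
        long[] partial = new long[workers];
        Thread[] ts = new Thread[workers];
        int chunk = (data.length + workers - 1) / workers;
        for (int w = 0; w < workers; w++) {
            final int id = w;
            final int from = id * chunk;
            final int to = Math.min(data.length, from + chunk);
            ts[w] = new Thread(() -> {
                long s = 0;
                for (int i = from; i < to; i++) s += data[i]; // each worker sums its own slice
                partial[id] = s;    // no contention: each worker writes its own slot
            });
            ts[w].start();
        }
        long total = 0;
        for (int w = 0; w < workers; w++) {
            try { ts[w].join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            total += partial[w];    // conquer step: combine the partial sums
        }
        return total;
    }

    public static void main(String[] args) {
        long[] data = new long[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1; // 1..1000
        System.out.println(parallelSum(data, 4)); // prints: 500500
    }
}
```

On a multicore machine the slices genuinely run in parallel; on a single core the scheduler time-slices them, so the answer is the same but without the speedup.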
Allowing requests to be processed simultaneously
- On a server, client requests can arrive concurrently and so need to be handled in parallel (the .NET Framework creates threads for this automatically if we use ASP.NET, WCF, Web Services, or Remoting). This can also be useful on a client (e.g., handling peer-to-peer networking).
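The thread-per-request model described above can be sketched as follows: each "request" is handled on its own thread, so a slow client does not block the others. This is a highly simplified stand-in for what ASP.NET or WCF do automatically; Java is used for illustration and the names are hypothetical.

```java
public class RequestDemo {
    // Pretend work for one client's request.
    static String handle(String request) {
        return "response to " + request;
    }

    // Handles every request on its own thread; responses land in
    // per-request slots, so no locking is needed.
    public static String[] serveConcurrently(String[] requests) {
        String[] responses = new String[requests.length];
        Thread[] ts = new Thread[requests.length];
        for (int i = 0; i < requests.length; i++) {
            final int id = i;
            ts[i] = new Thread(() -> responses[id] = handle(requests[id]));
            ts[i].start();          // each request gets its own thread
        }
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return responses;
    }

    public static void main(String[] args) {
        String[] out = serveConcurrently(new String[]{"A", "B"});
        System.out.println(out[0] + " | " + out[1]); // prints: response to A | response to B
    }
}
```

Real servers use a thread pool rather than one fresh thread per request, but the concurrency idea is the same.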