






















Multiprocessors: Applications, Benefits, Advantages and Disadvantages, and Characteristics
Typology: Study notes
Introduction to Multiprocessors

Multiprocessor: A multiprocessor is a computer system with two or more central processing units (CPUs) that share full access to a common RAM. The main objective of using a multiprocessor is to boost the system's execution speed; secondary objectives are fault tolerance and application matching.

There are two types of multiprocessors: shared-memory multiprocessors and distributed-memory multiprocessors. In a shared-memory multiprocessor, all the CPUs share the common memory, whereas in a distributed-memory multiprocessor, every CPU has its own private memory.

Applications of Multiprocessors –
Advantages:
o Improved performance − multiprocessor systems can execute tasks faster than single-processor systems, since the workload can be distributed across multiple processors.
o Better scalability − multiprocessor systems can be scaled more easily than single-processor systems, because additional processors can be added to handle increased workloads.
o Increased reliability − a multiprocessor system can continue to operate even if one processor fails, as the remaining processors keep executing tasks.
o Reduced cost − a multiprocessor system can be more cost-effective than building multiple single-processor systems to handle the same workload.
o Enhanced parallelism − multiprocessor systems allow greater parallelism, as different processors can execute different tasks simultaneously.

Disadvantages:
o Increased complexity − multiprocessor systems are more complex than single-processor systems and require additional hardware, software, and management resources.
o Higher power consumption − multiprocessor systems require more power than single-processor systems, which increases the cost of operating and maintaining the system.
o Difficult programming − developing software that effectively utilizes multiple processors is challenging and requires specialized programming skills.
o Synchronization issues − processors must be synchronized with one another so that tasks are executed correctly and efficiently, which adds complexity and overhead to the system.
o Limited performance gains − not all applications can benefit from multiple processors, and some applications may see only limited performance gains on a multiprocessor system.

Multiprocessor and its Characteristics:
A multiprocessor is a single computer that has multiple processors. The processors in a multiprocessor system can communicate and cooperate at various levels while solving a given problem. Communication between the processors takes place either by sending messages from one processor to another or by sharing a common memory; a minimal shared-variable example is sketched below.
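To make the shared-memory style of interprocessor communication concrete, here is a minimal sketch, assuming a POSIX threads environment; the names shared_counter and worker are illustrative and not taken from the notes above. Two threads, which the operating system may schedule on different CPUs, cooperate through a single variable in common memory, protected by a mutex.

/* Minimal shared-memory communication sketch (POSIX threads assumed). */
#include <pthread.h>
#include <stdio.h>

static long shared_counter = 0;                 /* lives in memory visible to all CPUs */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);              /* synchronize access to the shared variable */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);    /* the two threads may run on different processors */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", shared_counter);  /* 200000 when access is properly synchronized */
    return 0;
}

Without the mutex, the two processors could interleave their read-modify-write sequences and lose updates, which is exactly the kind of synchronization issue listed among the disadvantages above.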
Shared-Memory Multiprocessor Models:
The most popular parallel computers are those that execute programs in MIMD mode. There are two major types of parallel computers: shared-memory multiprocessors and message-passing multicomputers. The main difference between multiprocessors and multicomputers lies in memory sharing and in the mechanisms used for interprocessor communication. The processors in a multiprocessor system communicate with each other through shared variables in a common memory. Each computer node in a multicomputer system has local memory that is not shared with other nodes; interprocess communication is done through message passing among the nodes.

Three shared-memory multiprocessor models are as follows:

UMA Model
UMA stands for Uniform Memory Access. In this model, the physical memory is uniformly shared by all the processors: every processor has the same access time to all memory words, which is why it is called uniform memory access. Each processor may use a private cache, and peripherals are also shared. The UMA model is suitable for time-sharing applications with many users, and it can be used to speed up the execution of a single large program in time-critical applications. When all processors have equal access to all peripheral devices, the system is called a symmetric multiprocessor; in such a system, all the processors are equally capable of running programs, including the kernel.
NUMA Model
NUMA stands for Non-Uniform Memory Access. A NUMA multiprocessor is a shared-memory system in which the access time varies with the location of the memory word. In the usual NUMA machine models, the shared memory is physically distributed among the processors as local memories, and the set of all local memories forms a global address space accessible by every processor. It is faster to access a local memory through its local processor; access to remote memory attached to other processors takes longer because of the added delay through the interconnection network.

COMA Model
COMA stands for Cache-Only Memory Architecture. This model is a special case of a NUMA machine in which the distributed main memories are replaced with cache memories. At each individual processor node there is no memory hierarchy; all the caches together form a global address space. Depending on the interconnection network used, directories may be used to help locate copies of cache blocks. An example of a COMA machine is the Swedish Institute of Computer Science's Data Diffusion Machine (DDM).
Models of Memory Consistency:
In a multiprocessor system where multiple threads or processors share memory, it is crucial to define the order in which memory operations are observed by different parts of the system. Memory consistency models dictate how memory operations are perceived by different threads, and each model defines a set of rules for how memory reads and writes may be ordered; common examples include sequential consistency, weak (relaxed) consistency, and release consistency. Each model represents a trade-off between the strictness of the ordering of memory operations and the ease of programming and reasoning about concurrent systems, as the sketch below illustrates.
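The classic two-flag test below shows why the consistency model matters. This is a sketch assuming a C11 compiler with <stdatomic.h> and POSIX threads; the variable and function names are illustrative. Under sequential consistency the outcome r1 == 0 and r2 == 0 is impossible, because some total order of the four memory operations must place one store before the other thread's load; under a relaxed model the stores may be reordered past the later loads, and both results can end up 0.

/* Memory consistency sketch (C11 atomics and POSIX threads assumed). */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int x = 0, y = 0;
static int r1, r2;

static void *writer_x(void *arg)
{
    atomic_store_explicit(&x, 1, memory_order_seq_cst);   /* try memory_order_relaxed to weaken the ordering */
    r1 = atomic_load_explicit(&y, memory_order_seq_cst);
    return NULL;
}

static void *writer_y(void *arg)
{
    atomic_store_explicit(&y, 1, memory_order_seq_cst);
    r2 = atomic_load_explicit(&x, memory_order_seq_cst);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, writer_x, NULL);
    pthread_create(&b, NULL, writer_y, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("r1=%d r2=%d\n", r1, r2);   /* (0,0) never appears under sequential consistency */
    return 0;
}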
Synchronization mechanisms and memory consistency models are crucial for designing efficient and correct parallel and distributed systems. Proper synchronization ensures data integrity, while the choice of a memory consistency model defines how memory operations are observed by different parts of a multiprocessor system. These concepts are fundamental for building reliable and high-performance parallel applications.

Issues of Deadlock and Scheduling in Multiprocessors
In a multiprocessor system, the issues of deadlock and scheduling play critical roles in system performance, resource utilization, and overall system stability.

Deadlock in Multiprocessors:
Deadlock is a state in which two or more processes are unable to proceed because each is waiting for the other to release a resource, resulting in a standstill. In a multiprocessor system, where multiple processes share resources such as memory, I/O devices, or communication channels, deadlocks can occur because of concurrent access to and sharing of these resources. A typical pattern is a circular wait in which each processor holds one resource while waiting for another, as in the sketch below.
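The following sketch, assuming POSIX threads, shows the circular-wait pattern with two locks acquired in opposite orders; the names lockA, lockB, thread_a, and thread_b are illustrative. Acquiring the locks in the same global order in both threads removes the circular wait and therefore the deadlock.

/* Deadlock sketch (POSIX threads assumed): two locks taken in opposite orders. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg)
{
    pthread_mutex_lock(&lockA);
    pthread_mutex_lock(&lockB);      /* waits if thread_b already holds lockB */
    /* ... use both shared resources ... */
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return NULL;
}

static void *thread_b(void *arg)
{
    /* Deadlock-prone: opposite acquisition order from thread_a.
     * Fix: acquire lockA before lockB in both threads (one global lock order). */
    pthread_mutex_lock(&lockB);
    pthread_mutex_lock(&lockA);
    /* ... use both shared resources ... */
    pthread_mutex_unlock(&lockA);
    pthread_mutex_unlock(&lockB);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);   /* may hang forever if the circular wait occurs */
    pthread_join(b, NULL);
    puts("finished without deadlocking this time");
    return 0;
}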
Parallel Processing
Parallel processing is a class of techniques that enables a system to carry out simultaneous data-processing tasks in order to increase the computational speed of a computer. A parallel processing system performs data processing concurrently to achieve faster execution time; for instance, while one instruction is being executed in the ALU of the CPU, the next instruction can be read from memory. The primary purpose of parallel processing is to enhance the computer's processing capability and increase its throughput, i.e., the amount of processing that can be accomplished during a given interval of time.

Parallel processing can be achieved by having a multiplicity of functional units that perform identical or different operations simultaneously, with the data distributed among the multiple functional units. One possible way of separating the execution unit is into eight functional units operating in parallel, where the operation performed in each functional unit is, for example:
o The adder and integer multiplier perform arithmetic operations on integer numbers.
o The floating-point operations are separated into three circuits operating in parallel.
o The logic, shift, and increment operations can be performed concurrently on different data.
All units are independent of each other, so one number can be shifted while another number is being incremented. A small sketch of this kind of parallelism is given below. There are a variety of ways in which parallel processing can be classified.
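The sketch below assumes a C compiler with OpenMP support (built with -fopenmp); the array names are illustrative. It distributes an element-wise addition over the available processors so that many additions proceed simultaneously, much as the parallel functional units described above would.

/* Data-parallel sketch (OpenMP assumed). */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    enum { N = 1000000 };
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

    #pragma omp parallel for            /* each processor handles a chunk of the index range */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f (max threads: %d)\n", c[N - 1], omp_get_max_threads());
    return 0;
}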
These classes follow Flynn's classification of computer architectures, which groups computers into SISD, SIMD, MISD, and MIMD classes.

2. SIMD Computers
SIMD computers contain one control unit, multiple processing units, and a shared memory or interconnection network. A single control unit sends instructions to all processing units. During computation, at each step, all the processors receive one set of instructions from the control unit and operate on different sets of data from the memory unit. Each processing unit has its own local memory unit to store both data and instructions. In SIMD computers, processors need to communicate among themselves; this is done through shared memory or through the interconnection network. While some of the processors execute a set of instructions, the remaining processors wait for their next set of instructions. Instructions from the control unit decide which processors will be active (execute instructions) and which will be inactive (wait for the next instruction).

3. MISD Computers
As the name suggests, MISD computers contain multiple control units, multiple processing units, and one common memory unit.
Here, each processor has its own control unit, and all processors share a common memory unit. Every processor gets instructions individually from its own control unit and operates on a single stream of data as per the instructions it has received. These processors operate simultaneously.

4. MIMD Computers
MIMD computers have multiple control units, multiple processing units, and a shared memory or interconnection network. Here, each processor has its own control unit, local memory unit, and arithmetic and logic unit. The processors receive different sets of instructions from their respective control units and operate on different sets of data.

Note: An MIMD computer that shares a common memory is known as a multiprocessor, while one that uses an interconnection network is known as a multicomputer. Based on the physical distance between the processors, multicomputers are of two types:
o Multicomputer − when all the processors are very close to one another (e.g., in the same room).
o Distributed system − when the processors are far away from one another (e.g., in different cities).
A minimal message-passing sketch in the multicomputer style is given below.
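In the multicomputer style, nodes have private memories and communicate only by messages. The following sketch assumes an MPI installation (compiled with mpicc and run with, e.g., mpirun -np 2); the value being sent is purely illustrative.

/* Message-passing sketch (MPI assumed): rank 0 sends an integer to rank 1. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                                          /* exists only in rank 0's local memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);               /* copied into rank 1's local memory */
    }

    MPI_Finalize();
    return 0;
}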
The time complexity of an algorithm can be classified into three categories:
o Worst-case complexity − when the amount of time required by the algorithm for a given input is maximum.
o Average-case complexity − when the amount of time required by the algorithm for a given input is average.
o Best-case complexity − when the amount of time required by the algorithm for a given input is minimum.

Asymptotic Analysis
The complexity or efficiency of an algorithm is the number of steps executed by the algorithm to obtain the desired output. Asymptotic analysis is used to calculate the complexity of an algorithm in its theoretical analysis; a large input length is assumed when deriving the complexity function of the algorithm.
Note − "Asymptotic" describes a condition in which a line tends to meet a curve but they do not intersect; the line and the curve are then said to be asymptotic to each other.
Asymptotic notation is the easiest way to describe the fastest and slowest possible execution times of an algorithm, using upper and lower bounds on speed. For this, we use the following notations:
o Big O notation
o Omega notation
o Theta notation

Big O Notation
In mathematics, Big O notation is used to represent the asymptotic behavior of functions: it describes the behavior of a function for large inputs in a simple and accurate way. It represents the upper bound of an algorithm's execution time, i.e., the longest time the algorithm could take to complete its execution.
f(n) = O(g(n)) iff there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
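As a quick worked example (the function is chosen purely for illustration): if f(n) = 3n + 2, then f(n) = O(n), because 3n + 2 ≤ 4n for every n ≥ 2, so the definition is satisfied with c = 4 and n0 = 2.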
Omega Notation
Omega notation is a method of representing the lower bound of an algorithm's execution time.
f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.

Theta Notation
Theta notation is a method of representing both the lower bound and the upper bound of an algorithm's execution time.
f(n) = Θ(g(n)) iff there exist positive constants c1, c2, and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.

Speedup of an Algorithm
The performance of a parallel algorithm is assessed by calculating its speedup. Speedup is defined as the ratio of the worst-case execution time of the fastest known sequential algorithm for a particular problem to the worst-case execution time of the parallel algorithm:
Speedup = (worst-case execution time of the fastest known sequential algorithm for the problem) / (worst-case execution time of the parallel algorithm)

Number of Processors Used
The number of processors used is an important factor in analyzing the efficiency of a parallel algorithm, since the cost to buy, maintain, and run the computers must be taken into account. The larger the number of processors used by an algorithm to solve a problem, the more costly the obtained result becomes.

Total Cost
The total cost of a parallel algorithm is the product of its time complexity and the number of processors used:
Total cost = time complexity × number of processors used
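As an illustrative example with hypothetical numbers: if the fastest known sequential algorithm for a problem has a worst-case execution time of 1000 ms and a parallel algorithm using 8 processors runs in 200 ms in the worst case, then speedup = 1000 / 200 = 5, while the total cost of the parallel algorithm is 200 ms × 8 processors = 1600 ms, compared with 1000 ms for the sequential algorithm, so the parallel version does more total work even though it finishes sooner.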
How a Microprocessor Works:
The microprocessor follows a fetch-decode-execute sequence. Initially, the instructions are stored in memory in sequential order. The microprocessor fetches those instructions from memory, decodes them, and executes them until a STOP instruction is reached; it then sends the result, in binary, to the output port. During these steps, the registers temporarily store data and the ALU performs the computing functions. (A toy sketch of this cycle is given at the end of this section.)

List of Terms Used with a Microprocessor
Here is a list of some frequently used terms:
o Instruction set − the set of instructions that the microprocessor can understand.
o Bandwidth − the number of bits processed in a single instruction.
o Clock speed − the number of operations per second the processor can perform, expressed in megahertz (MHz) or gigahertz (GHz); also known as the clock rate.
o Word length − depends on the width of the internal data bus, registers, ALU, etc. An 8-bit microprocessor can process 8 bits of data at a time. The word length ranges from 4 bits to 64 bits depending on the type of microcomputer.
o Data types − the microprocessor supports multiple data-type formats such as binary, BCD, ASCII, and signed and unsigned numbers.

Features of a Microprocessor
Here is a list of some of the most prominent features of any microprocessor:
o Cost-effective − microprocessor chips are available at low prices, which results in low overall system cost.
o Size − the microprocessor is a small chip, hence it is portable.
o Low power consumption − microprocessors are manufactured using metal-oxide semiconductor (MOS) technology, which has low power consumption.
o Versatility − microprocessors are versatile, as the same chip can be used in a number of applications by changing the software program.
o Reliability − the failure rate of the ICs used in microprocessors is very low, hence they are reliable.
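To make the fetch-decode-execute cycle concrete, here is a toy sketch in C; the opcodes, the two-word instruction format, and the accumulator machine are invented for illustration and do not correspond to any real microprocessor's instruction set.

/* Toy fetch-decode-execute loop (hypothetical instruction set, for illustration only). */
#include <stdio.h>

enum { OP_STOP = 0, OP_LOAD = 1, OP_ADD = 2 };   /* made-up opcodes */

int main(void)
{
    /* Program "memory": (opcode, operand) pairs stored sequentially. */
    int memory[] = { OP_LOAD, 5, OP_ADD, 7, OP_ADD, 3, OP_STOP, 0 };
    int pc  = 0;    /* program counter: address of the next instruction */
    int acc = 0;    /* accumulator register: temporarily holds data     */

    for (;;) {
        int opcode  = memory[pc];          /* fetch */
        int operand = memory[pc + 1];
        pc += 2;
        switch (opcode) {                  /* decode */
        case OP_LOAD: acc = operand;  break;          /* execute: move data into the register */
        case OP_ADD:  acc += operand; break;          /* execute: the ALU performs the addition */
        case OP_STOP: printf("result = %d\n", acc);   /* send the final result to the output */
                      return 0;
        }
    }
}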