Computer Architecture and Organization: Study Notes

MODULE 1:

Functional blocks of a computer: CPU, memory, input-output subsystems, control unit

Functional Units: A computer consists of five functionally independent main parts: the input, memory, arithmetic and logic, output, and control units.

Input Unit

Computers accept coded information through input units. The most common input device is the keyboard. Whenever a key is pressed, the corresponding letter or digit is automatically translated into its binary code and transmitted to the processor. Many other kinds of input devices for human-computer interaction are available, including the touchpad, mouse, joystick, and trackball. These are often used as graphic input devices in conjunction with displays. Microphones can capture audio input, which is then sampled and converted into digital codes for storage and processing. Similarly, cameras can capture video input. Digital communication facilities, such as the Internet, can also provide input to a computer from other computers and database servers.

Central Processing Unit (CPU): Once information is entered into the computer by an input device, the processor processes it. The CPU is called the brain of the computer because it is the computer's control center. It first fetches instructions from memory and then interprets them to determine what is to be done. If required, data is fetched from memory or from an input device. The CPU then executes the required computation and either stores the result or displays it on an output device. The CPU has three main components, each responsible for different functions: the Arithmetic Logic Unit (ALU), the Control Unit (CU), and the memory registers.

Arithmetic and Logic Unit (ALU): The ALU, as its name suggests, performs mathematical calculations and makes logical decisions. Arithmetic calculations include addition, subtraction, multiplication, and division. Logical decisions involve comparing two data items to see which one is larger, smaller, or equal.
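To make the ALU's role concrete, here is a minimal sketch in Python of a toy integer ALU that selects among a few arithmetic and logic operations; the operation names are illustrative and not taken from any real instruction set.

```python
# Minimal sketch of a toy ALU's arithmetic and logic operations
# (operation names are hypothetical, not from a real ISA).

def alu(op: str, a: int, b: int) -> int:
    """Perform one arithmetic or logic operation, as an ALU would."""
    if op == "ADD":
        return a + b
    if op == "SUB":
        return a - b
    if op == "MUL":
        return a * b
    if op == "DIV":
        return a // b
    if op == "CMP":                    # comparison as a logical decision:
        return (a > b) - (a < b)       # 1 if a > b, 0 if equal, -1 if a < b
    raise ValueError(f"unknown operation {op}")

print(alu("ADD", 7, 5))   # 12
print(alu("CMP", 3, 9))   # -1 (3 is smaller)
```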

Memory Unit: The function of the memory unit is to store programs and data. There are two classes of storage, called primary and secondary.

Primary Memory: Also called main memory, this is a fast memory that operates at electronic speeds. Programs must be stored in this memory while they are being executed. The memory consists of a large number of semiconductor storage cells, each capable of storing one bit of information. These cells are rarely read or written individually. Instead, they are handled in groups of fixed size called words. The memory is organized so that one word can be stored or retrieved in one basic operation. The number of bits in each word is referred to as the word length of the computer, typically 16, 32, or 64 bits. To provide easy access to any word in the memory, a distinct address is associated with each word location. Addresses are consecutive numbers, starting from 0, that identify successive locations. Instructions and data can be written into or read from the memory under the control of the processor. A memory in which any location can be accessed in a short and fixed amount of time after specifying its address is called a random-access memory (RAM). The time required to access one word is called the memory access time. This time is independent of the location of the word being accessed and typically ranges from a few nanoseconds (ns) to about 100 ns for current RAM units.

Cache Memory: As an adjunct to the main memory, a smaller, faster RAM unit, called a cache, is used to hold sections of a program that are currently being executed, along with any associated data. The cache is tightly coupled with the processor and is usually contained on the same integrated-circuit chip. The purpose of the cache is to facilitate high instruction execution rates. At the start of program execution, the cache is empty. As execution proceeds, instructions are fetched into the processor chip, and a copy of each is placed in the cache. When the execution of an instruction requires data located in the main memory, the data are fetched and copies are also placed in the cache. If instructions are executed repeatedly, as in a program loop, and they are available in the cache, they can be fetched quickly during the period of repeated use.

Secondary Storage: Although primary memory is essential, it tends to be expensive and does not retain information when power is turned off. Thus, additional, less expensive, permanent secondary storage is used when large amounts of data and many programs have to be stored, particularly for information that is accessed infrequently. Access times for secondary storage are longer than for primary memory. Available devices include magnetic disks, optical disks (DVDs and CDs), and flash memory devices.
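Returning to the cache described above, the following is a minimal sketch of a direct-mapped cache lookup. The 4-line cache, the fake memory contents, and the read function are all hypothetical simplifications (a real cache transfers whole blocks, not single words).

```python
# Sketch of a direct-mapped cache lookup (hypothetical 4-line cache,
# word-addressed memory; all sizes are illustrative).

NUM_LINES = 4                      # a 4-line cache -> address % 4 picks a line

cache = {}                         # line index -> (tag, data)
memory = {addr: addr * 10 for addr in range(64)}   # stand-in main memory

def read(addr: int) -> int:
    index = addr % NUM_LINES       # which cache line the address maps to
    tag = addr // NUM_LINES        # identifies which block occupies that line
    line = cache.get(index)
    if line is not None and line[0] == tag:
        print(f"addr {addr}: hit")
        return line[1]
    print(f"addr {addr}: miss, fetching from main memory")
    data = memory[addr]            # slow main-memory access
    cache[index] = (tag, data)     # place a copy in the cache
    return data

read(5)    # miss: first access
read(5)    # hit: now served from the cache
read(9)    # miss: maps to the same line (9 % 4 == 1), evicting block 5
```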

Output Unit: The output unit's function is to send processed results to the outside world. A familiar example of such a device is a printer. Most printers employ either photocopying techniques, as in laser printers, or ink jet streams. Such printers may generate output at speeds of 20 or more pages per minute. However, printers are mechanical devices and are therefore quite slow compared to the electronic speed of a processor. Some units, such as graphic displays, provide both an output function, showing text and graphics, and an input function, through touchscreen capability. The dual role of such units is the reason for using the single name input/output (I/O) unit in many cases.

Control Unit: The memory, arithmetic and logic, and I/O units store and process information and perform input and output operations. The operation of these units must be coordinated in some way; this is the responsibility of the control unit. The control unit is effectively the nerve center that sends control signals to other units and senses their states. I/O transfers, consisting of input and output operations, are controlled by program instructions that identify the devices involved and the information to be transferred. Control circuits are responsible for generating the timing signals that govern the transfers; they determine when a given action is to take place. Data transfers between the processor and the memory are also managed by the control unit through timing signals. A large set of control lines (wires) carries the signals used for timing and synchronization of events in all units. The operation of a computer can be summarized as follows:

  • The computer accepts information in the form of programs and data through an input unit and stores it in the memory.
  • Information stored in the memory is fetched under program control into an arithmetic and logic unit, where it is processed.
  • Processed information leaves the computer through an output unit.
  • All activities in the computer are directed by the control unit.

Instruction set architecture of a CPU – registers, instruction execution cycle, RTL interpretation of instructions, addressing modes, instruction set

REGISTERS: The processor provides 16 registers for use in general system and application programming. These registers can be grouped as follows:
  • General-purpose data registers. These eight registers are available for storing operands and pointers.
  • Segment registers. These registers hold up to six segment selectors.

2. Instruction Execution Cycle

The instruction execution cycle (or fetch-decode-execute cycle) is the fundamental process by which a CPU executes instructions. It consists of a series of steps that are repeated for each instruction in a program:

  1. Fetch: The CPU retrieves the instruction from memory at the address pointed to by the Program Counter (PC). It places the instruction in the Instruction Register (IR) and increments the PC.
  2. Decode: The control unit decodes the instruction in the IR to determine which operation to perform and the operands needed.
  3. Execute: The decoded instruction is executed by the CPU. This might involve arithmetic or logical operations, memory access, or control instructions.
  4. Writeback: Any results generated by the operation are written back to the appropriate destination, such as a register or memory location.

Each of these steps uses different parts of the CPU, such as the ALU, registers, and control unit. The cycle repeats for each instruction, allowing the CPU to carry out the instructions in a program sequentially (or out of order in modern processors with optimized pipelines).

3. RTL (Register Transfer Language) Interpretation of Instructions

Register Transfer Language (RTL) is a symbolic notation used to describe the low-level operations performed by the CPU as it executes instructions. RTL describes how data moves between registers, memory, and the ALU, and is particularly useful for understanding the exact operations at each stage of the instruction cycle. For example, an instruction like ADD R1, R2, R3 (add the contents of R2 and R3, store the result in R1) has the RTL interpretation:

  R1 ← R2 + R3

RTL statements describe the transfers and transformations on data at the register level, revealing what each part of the CPU does for a given instruction. RTL helps computer architects understand and design the internal data flow of the CPU, providing a foundation for hardware implementation and optimization.

Register Transfer Language: Digital systems are composed of modules that are constructed from digital components, such as registers, decoders, arithmetic elements, and control logic. The modules are interconnected with common data and control paths to form a digital computer system.

The operations executed on data stored in registers are called microoperations. A microoperation is an elementary operation performed on the information stored in one or more registers; examples are shift, count, clear, and load. Some of the digital components mentioned above are registers that implement microoperations. The internal hardware organization of a digital computer is best defined by specifying:

  • the set of registers it contains and their functions,
  • the sequence of microoperations performed on the binary information stored in them, and
  • the control that initiates the sequence of microoperations.

Symbols, rather than words, are used to specify the sequence of microoperations; this symbolic notation is called a register transfer language. A programming language is a procedure for writing symbols to specify a given computational process. We define symbols for various types of microoperations and describe the associated hardware that can implement them.

Register Transfer: Computer registers are designated by capital letters that denote their function. The register that holds an address for the memory unit is called MAR; the program counter register is called PC; IR is the instruction register; and R1 is a processor register. The individual flip-flops in an n-bit register are numbered in sequence from 0 to n−1.
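To tie the instruction cycle and the RTL notation together, the following sketch simulates the fetch-decode-execute loop for a hypothetical accumulator-style machine. The three-opcode ISA (LOAD, ADD, HALT) and the memory layout are invented for illustration; the comments give the RTL view of each register transfer.

```python
# Fetch-decode-execute loop for a hypothetical accumulator machine.
# The three-opcode ISA (LOAD, ADD, HALT) is invented for illustration;
# comments show the RTL view of each register transfer.

LOAD, ADD, HALT = 0, 1, 2

# Program: load M[4] into ACC, add M[5], stop. Data lives at addresses 4-5.
mem = [(LOAD, 4), (ADD, 5), (HALT, 0), (0, 0), 7, 35]

pc, acc = 0, 0
while True:
    ir = mem[pc]                   # fetch:   IR <- M[PC]
    pc = pc + 1                    #          PC <- PC + 1
    opcode, operand = ir           # decode:  split IR into opcode and address
    if opcode == LOAD:
        acc = mem[operand]         # execute: ACC <- M[operand]
    elif opcode == ADD:
        acc = acc + mem[operand]   #          ACC <- ACC + M[operand]
    elif opcode == HALT:
        break

print(acc)   # 42
```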

4. Addressing Modes

Addressing modes specify how the CPU should interpret the operands of an instruction. They determine how the CPU locates data in memory or registers, offering flexibility and efficiency in programming. Common addressing modes include:

  1. Immediate Addressing: The operand is specified directly within the instruction itself. Useful for constants.
     o Example: MOV R1, #5 (move the value 5 directly into R1).
  2. Direct Addressing: The operand is located at a specific memory address, which is provided within the instruction.
     o Example: MOV R1, 0x1000 (move the data at address 0x1000 into R1).
  3. Indirect Addressing: The address of the operand is specified in a register, which indirectly points to the actual data location.
     o Example: MOV R1, [R2] (move the data at the address stored in R2 into R1).
  4. Register Addressing: The operand is located in a register specified by the instruction.
     o Example: ADD R1, R2, R3 (add the values in R2 and R3, store the result in R1).
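A small sketch of how a simulator might resolve a source operand under each of the four modes above; the register file and memory contents here are hypothetical.

```python
# Resolving a source operand under the four addressing modes above
# (register and memory contents are hypothetical).

regs = {"R2": 0x1000}
mem = {0x1000: 99}

def operand(mode: str, value):
    if mode == "immediate":        # MOV R1, #5     -> operand is the constant
        return value
    if mode == "direct":           # MOV R1, 0x1000 -> operand is M[address]
        return mem[value]
    if mode == "register":         # ADD R1, R2, R3 -> operand is in a register
        return regs[value]
    if mode == "indirect":         # MOV R1, [R2]   -> operand is M[R[reg]]
        return mem[regs[value]]
    raise ValueError(mode)

print(operand("immediate", 5))       # 5
print(operand("direct", 0x1000))     # 99
print(operand("register", "R2"))     # 4096 (0x1000)
print(operand("indirect", "R2"))     # 99
```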

An instruction set architecture (ISA) defines the set of operations a CPU can perform, forming the basis of the CPU's design and affecting both the software that can run on it and the hardware's complexity. Different CPUs, such as those from Intel, ARM, and MIPS, offer distinct instruction sets. This case study explores three architectures: x86 (Intel and AMD), ARM (widely used in mobile and embedded systems), and MIPS (common in academic and embedded environments), illustrating the diversity in CPU instruction sets and their impacts on performance, power consumption, and application.

  1. x86 Architecture (Intel and AMD)

The x86 architecture is one of the oldest and most widely used instruction sets, developed by Intel in the late 1970s. It is based on complex instruction set computing (CISC), which provides a wide variety of instructions that can perform multiple steps within a single instruction. The x86 ISA has evolved through numerous versions (e.g., x86, x86-32, x86-64), adding new features and extending the instruction set for modern computing needs.

Key features of the x86 instruction set:

  • CISC design philosophy: The x86 ISA includes many instructions that perform complex operations, allowing for compact code but requiring more complex decoding. Instructions can vary in length from 1 to 15 bytes, enabling a rich set of operations but making pipelining and decoding more challenging.
  • Data movement instructions: x86 provides multiple ways to move data between registers, memory, and the stack, including MOV for general data transfer and specialized instructions like PUSH and POP for stack management.
  • Arithmetic and logic instructions: Instructions include basic operations like ADD, SUB, and MUL, as well as advanced operations such as IMUL (integer multiplication) and DIV for division. x86 also supports bitwise operations (AND, OR, XOR) and shifts (SHL, SHR).
  • Control flow instructions: The x86 ISA includes jump and branch instructions for altering program flow: JMP for unconditional jumps, JZ/JNZ for conditional jumps based on the zero/non-zero flags, and CALL/RET for function calls. Branch prediction is essential for high performance due to the varied length and complexity of instructions.
  • Floating-point instructions: x86 includes an extensive set of floating-point instructions, initially supported by the x87 co-processor and later integrated directly into the CPU. These instructions handle single, double, and extended precision operations, supporting both scalar and SIMD (single instruction, multiple data) operations.
  • SIMD extensions (MMX, SSE, AVX): Over the years, x86 has added SIMD instructions to handle multimedia processing more efficiently. These include MMX (for integer operations), SSE (for floating point), and AVX (for larger registers and more parallelism). AVX-512, a recent addition, offers 512-bit SIMD operations.

Applications of the x86 ISA: Due to its flexibility and backward compatibility, x86 is widely used in desktop, laptop, and server processors. It provides a broad instruction set, which can make it more power-hungry but versatile. Intel and AMD continually optimize the x86 design to enhance performance for complex tasks like gaming, data processing, and software development.
  2. ARM Architecture

The ARM architecture is based on the reduced instruction set computing (RISC) philosophy. It was developed by Arm Holdings and is widely used in mobile devices, embedded systems, and IoT devices due to its power efficiency and straightforward design. The ARM ISA comes in multiple versions, including ARMv7 (32-bit) and ARMv8 (64-bit), and recent iterations (like ARMv9) continue to enhance performance and security features.

Key features of the ARM instruction set:

  • RISC design philosophy: ARM uses a simplified, fixed-length 32-bit instruction format (16-bit for Thumb instructions), leading to easier decoding and efficient pipelining. RISC architectures generally have fewer, simpler instructions, reducing power consumption and increasing processing efficiency.
  • Data movement instructions: ARM instructions include LDR (load), STR (store), and MOV (move) for data handling. ARM employs a load-store architecture, meaning data is moved to/from registers before arithmetic is performed, reducing direct memory manipulation.
  • Arithmetic and logic instructions: ARM supports basic arithmetic (ADD, SUB) and logic (AND, ORR, EOR) instructions. Multiplication and division are handled by MUL and DIV. ARM also supports conditionally executed instructions, which allow certain instructions to execute based on the status flags, reducing the need for branching.
  • Branching and control flow instructions: ARM includes both unconditional (B, BL) and conditional (BEQ, BNE) branch instructions. These instructions rely on the condition codes, which are typically set by arithmetic and logic operations. ARM's use of conditional execution minimizes the performance cost of branches.
  • SIMD and vector processing: ARM's NEON technology provides SIMD processing capabilities, allowing parallel processing of multimedia tasks such as image processing and cryptographic operations, making it suitable for high-performance mobile applications.
  • Power efficiency and Thumb instructions: ARM introduced the Thumb instruction set, which uses 16-bit instructions to reduce code size, enhancing efficiency for embedded systems. ARM processors use power-efficient designs, making ARM a popular choice for battery-powered devices.

Applications of the ARM ISA: ARM's power efficiency and scalable design make it a staple in smartphones, tablets, and embedded systems. Companies like Apple (with its A-series and M-series chips), Qualcomm, and Samsung use customized ARM architectures to provide high-performance, low-power CPUs for the mobile and consumer electronics markets. ARM's flexibility allows it to balance performance and efficiency for both general-purpose and specialized computing.
  3. MIPS Architecture

MIPS (Microprocessor without Interlocked Pipeline Stages) is another RISC-based architecture, developed by MIPS Technologies. It has a simpler and more predictable instruction set, commonly used in embedded systems and in education.

Feature               | x86 (Intel/AMD)   | ARM              | MIPS
Branch prediction     | Advanced          | Moderate         | Simple
Power efficiency      | Moderate          | High             | Moderate
Typical applications  | Desktops, servers | Mobile, embedded | Embedded, education

Data representation: signed number representation, fixed and floating point representations, character representation

  1. Signed Number Representation

Signed number representation allows computers to encode both positive and negative values. Since binary format only naturally represents positive numbers, specific techniques are needed to denote negative values. Common techniques for signed number representation:

  • Sign-and-Magnitude Representation:
    o The leftmost bit (most significant bit) is used as the sign bit, with 0 representing positive and 1 representing negative. The remaining bits represent the magnitude (absolute value) of the number.
    o Example: In an 8-bit format, 00000101 represents +5, while 10000101 represents -5.
    o Drawback: Sign-and-magnitude representation has two representations for zero (00000000 for +0 and 10000000 for -0), which can complicate computations.
  • One's Complement Representation:
    o Negative numbers are represented by inverting all bits of their positive counterparts. The leftmost bit is still a sign bit (0 for positive, 1 for negative).
    o Example: +5 is 00000101, while -5 is represented as 11111010.
    o Drawback: Like sign-and-magnitude, one's complement has two representations of zero (00000000 for +0 and 11111111 for -0).
  • Two's Complement Representation:
    o Negative numbers are represented by inverting all bits of the positive number and then adding one to the result. The leftmost bit acts as the sign bit (0 for positive, 1 for negative).
    o Example: To represent -5, start with the binary for 5 (00000101), invert it (11111010), and add one to get 11111011.
    o Advantage: Two's complement has only one representation for zero and simplifies arithmetic, since addition and subtraction work directly on the binary values.

Two’s complement is the most widely used method for representing signed integers in modern computers due to its efficient handling of addition, subtraction, and zero representation.
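A short sketch that reproduces the examples above, encoding and decoding 8-bit two's-complement values; the helper function names are ours, chosen for illustration.

```python
# 8-bit two's-complement encode/decode, reproducing the -5 example above.

def to_twos_complement(value: int, bits: int = 8) -> int:
    """Encode a signed integer as an unsigned bit pattern."""
    return value & ((1 << bits) - 1)    # masking implements invert-and-add-one

def from_twos_complement(pattern: int, bits: int = 8) -> int:
    """Decode an unsigned bit pattern back to a signed integer."""
    if pattern & (1 << (bits - 1)):     # sign bit set -> negative value
        return pattern - (1 << bits)
    return pattern

print(format(to_twos_complement(-5), "08b"))   # 11111011
print(from_twos_complement(0b11111011))        # -5

# Subtraction as addition: 5 - 3 == 5 + (two's complement of 3), mod 2^8
print(from_twos_complement((5 + to_twos_complement(-3)) & 0xFF))   # 2
```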

  2. Fixed-Point and Floating-Point Representations

These two techniques represent real (fractional) numbers, allowing computers to work with non-integer values.

Fixed-Point Representation:

  • Fixed-point representation stores numbers with a specific number of digits (bits) allocated for the integer part and the fractional part.
  • In fixed-point binary notation, a designated binary point separates the integer bits from the fractional bits. The location of the binary point is fixed, meaning the number of fractional bits is constant.
  • Example: In an 8-bit fixed-point system with 4 bits for the integer part and 4 bits for the fractional part, 0010.1100 represents 2.75 in decimal (binary 0010 is 2, and .1100 is 0.75).

Fixed-point representation is often used in embedded systems where precision and range are limited and predictable. However, it is less flexible than floating-point representation when working with numbers of varying magnitudes.

Floating-Point Representation:

  • Floating-point representation uses scientific notation in binary, where a number is represented as ±mantissa × 2^exponent. The format follows the IEEE 754 standard, commonly used in computing.
  • The IEEE 754 single-precision (32-bit) and double-precision (64-bit) standards are the most widely adopted. In IEEE 754 single precision:
    o Sign bit: 1 bit for the sign (0 for positive, 1 for negative).
    o Exponent: 8 bits for the exponent, stored in "biased" form, where 127 is added to the exponent to avoid negative values.
    o Mantissa: 23 bits for the mantissa (or significand), which represents the precision of the number.
  • Example: In IEEE 754 single precision, -13.75 is represented as 11000001010111000000000000000000, where the first bit (1) denotes the sign, the next 8 bits are the exponent, and the remaining 23 bits are the mantissa.

Floating-point representation is advantageous because it provides a wide dynamic range, making it suitable for scientific, engineering, and graphics applications. It does have precision limitations, especially with very large or very small numbers, which can introduce rounding errors.
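A sketch that decodes the -13.75 example above field by field, then cross-checks the result with Python's struct module.

```python
# Decoding the IEEE 754 single-precision example above: -13.75 <-> 0xC15C0000.
import struct

bits = 0b11000001010111000000000000000000   # == 0xC15C0000

sign = (bits >> 31) & 0x1          # 1 sign bit
exponent = (bits >> 23) & 0xFF     # 8 exponent bits, biased by 127
mantissa = bits & 0x7FFFFF         # 23 mantissa (fraction) bits

# Normalized value: (-1)^sign * (1 + fraction) * 2^(exponent - 127)
value = (-1) ** sign * (1 + mantissa / 2**23) * 2 ** (exponent - 127)
print(value)                       # -13.75

# Cross-check against the standard library's native float unpacking:
print(struct.unpack(">f", bits.to_bytes(4, "big"))[0])   # -13.75
```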

Computer arithmetic – integer addition and subtraction, ripple carry adder, carry look-ahead adder

1. Integer Addition and Subtraction

Computers perform integer arithmetic using binary numbers, specifically two's complement representation for signed integers. In two's complement, positive and negative numbers are represented in a way that simplifies addition and subtraction, enabling the same circuitry to handle both operations without distinguishing between positive and negative signs explicitly.

Binary Addition: Binary addition operates similarly to decimal addition, with bitwise addition using the following basic rules:

  • 0 + 0 = 0
  • 1 + 0 = 1
  • 0 + 1 = 1
  • 1 + 1 = 0 with a carry of 1

Where two bits add up to 2 in binary (i.e., 10), a 0 is placed in the sum and a 1 is carried to the next higher bit. This carry propagation can slow down the operation, especially for large binary numbers.

Binary Subtraction: Binary subtraction is typically performed by adding the two's complement of the subtrahend (the number being subtracted) to the minuend (the number being subtracted from). To find the two's complement:

  1. Invert all the bits of the number.
  2. Add 1 to the inverted number.

For example, to calculate 5 - 3 in binary:

  • Represent 5 as 0101 and 3 as 0011.
  • Take the two's complement of 3 to get 1101.
  • Add 0101 and 1101 to obtain 1 0010. The leftmost 1 is discarded in a fixed-bit system, yielding 0010 (binary 2), which is the correct result.

2. Ripple Carry Adder

A ripple carry adder is a simple, basic circuit used to perform binary addition of two binary numbers. It consists of a series of full adders connected in sequence, where each full adder adds a single bit from each number, taking into account the carry bit from the previous stage.

How a Ripple Carry Adder Works:

  • Each bit position is handled by one full adder. The carry output from each full adder is passed as the carry input to the next higher bit position.
  • This process continues until the final full adder produces the sum and the final carry-out for the most significant bit.

For example, adding the two 4-bit binary numbers 1101 and 1011:

  1. The least significant bit (LSB) adder computes 1 + 1 = 0 with a carry of 1.
  2. This carry is added to the next bit, and the process continues until all bits are processed.

Limitations of the Ripple Carry Adder:

  • Propagation delay: The primary drawback is the carry propagation delay, where the carry signal must travel through each full adder in sequence. For an n-bit adder, the delay is proportional to n.
  • For large numbers, this delay can be significant, making ripple carry adders slower than other types of adders.

Despite its simplicity, the ripple carry adder is not used in high-performance processors because of this delay. However, it remains useful in applications where speed is less critical.

3. Carry Look-Ahead Adder

A carry look-ahead adder (CLA) addresses the speed limitation of the ripple carry adder by reducing the carry propagation delay. Instead of waiting for the carry to ripple through each bit position, the CLA uses logic to calculate the carry bits in advance.

How a Carry Look-Ahead Adder Works: The carry look-ahead adder uses two functions for each bit:

  1. Generate (G): A bit position generates a carry if both bits being added are 1.
     o Formula: G_i = A_i · B_i
  2. Propagate (P): A bit position propagates a carry from the previous position if at least one of the bits is 1.
     o Formula: P_i = A_i + B_i

The carry-out for each bit can then be calculated from the generate and propagate functions:

  • Carry-out formula: C_{i+1} = G_i + (P_i · C_i)

Using these formulas, a carry look-ahead adder can compute the carry for each bit position independently and simultaneously, drastically reducing the delay caused by carry propagation. For a 4-bit adder, the carry-out at each position is found by first computing the generate and propagate terms for each bit, then expanding the carry-out formula so that each carry depends only on the inputs and the initial carry-in.
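A sketch contrasting the two adders on the 4-bit example above (1101 + 1011, i.e., 13 + 11); keeping the bits in plain Python lists, least significant bit first, is purely an illustrative choice.

```python
# 4-bit ripple-carry vs. carry look-ahead addition.
# Bits are stored LSB-first in plain lists (an illustrative choice).

def full_adder(a, b, cin):
    s = a ^ b ^ cin                      # sum bit
    cout = (a & b) | (cin & (a ^ b))     # carry-out
    return s, cout

def ripple_carry_add(A, B):
    """Carry ripples sequentially: stage i must wait for stage i-1."""
    carry, out = 0, []
    for a, b in zip(A, B):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

def carry_lookahead_add(A, B, c0=0):
    """Carries from G_i = A_i.B_i and P_i = A_i + B_i via
       C_{i+1} = G_i + P_i.C_i; in hardware this expands to flat logic,
       so no carry waits on another."""
    G = [a & b for a, b in zip(A, B)]    # generate terms
    P = [a | b for a, b in zip(A, B)]    # propagate terms
    C = [c0]
    for i in range(len(A)):              # sequential here only for brevity
        C.append(G[i] | (P[i] & C[i]))
    out = [a ^ b ^ c for a, b, c in zip(A, B, C)]
    return out, C[-1]

A = [1, 0, 1, 1]   # 1101 (13), LSB first
B = [1, 1, 0, 1]   # 1011 (11), LSB first
print(ripple_carry_add(A, B))      # ([0, 0, 0, 1], 1) -> 11000 (24)
print(carry_lookahead_add(A, B))   # same result, carries computed up front
```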

Multiplication Techniques

  1. Booth Multiplier:
    o Uses Booth's algorithm, which reduces the number of additions by encoding consecutive 1s in the multiplier. Instead of adding multiple times for consecutive bits, Booth's algorithm only adds or subtracts the multiplicand when necessary.
    o Useful for signed binary numbers; reduces computation by skipping certain bits.

  2. Carry-Save Multiplier:
    o Optimizes multiplication by minimizing the carry propagation delay.
    o Instead of immediately propagating carries, the carry-save multiplier uses carry-save adders to accumulate partial products in carry-save form (storing carries and sums separately).
    o After all partial products are accumulated, a final addition resolves the carries.

This technique is particularly useful in high-speed multipliers and in ALUs for faster performance.

Division Techniques

Division is more complex than multiplication and requires specific algorithms for efficient computation. Two main techniques are used in computer systems (a sketch of both appears at the end of this module):
  1. Restoring Division:
    o In each step, the divisor is subtracted from the current remainder.
    o If the result is positive, a 1 is placed in the quotient, and the remainder is updated.
    o If the result is negative, the previous value is "restored" by adding back the divisor, and a 0 is placed in the quotient.
    o It is simple to implement but can be slower because it requires restoring the value whenever the subtraction goes negative.
  2. Non-Restoring Division:
    o Similar to restoring division, but avoids restoring the remainder when a subtraction results in a negative value.
    o Instead, it keeps track of the sign and adds the divisor in the next step if the previous subtraction was negative.
    o This eliminates the restoring step, making it faster in some cases; it is often used in digital hardware implementations.

Floating-Point Arithmetic

Floating-point arithmetic allows computers to handle real numbers with fractional components and is standardized by the IEEE 754 format.

  • Representation:
    o A floating-point number is represented in the form ±mantissa × 2^exponent.

    o The IEEE 754 standard defines single-precision (32-bit) and double-precision (64-bit) formats.
    o The 32-bit format includes 1 sign bit, 8 bits for the exponent (biased by 127), and 23 bits for the mantissa (or fraction), giving it a wide dynamic range.
  • Operations:
    o Addition and subtraction: Align the exponents, then add or subtract the mantissas, adjusting the result if necessary to keep it in normalized form.
    o Multiplication and division: Multiplication adds the exponents and multiplies the mantissas, while division subtracts the exponents and divides the mantissas.
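As referenced above, a sketch of the two division algorithms for unsigned integers; plain Python integers stand in for the remainder and quotient registers, and the 8-bit width is an arbitrary choice.

```python
# Restoring vs. non-restoring division for unsigned integers
# (a sketch of the two algorithms described above; Python integers
# stand in for the remainder and quotient registers).

def restoring_divide(dividend: int, divisor: int, bits: int = 8):
    r, q = 0, 0
    for i in range(bits - 1, -1, -1):
        r = r * 2 + ((dividend >> i) & 1)   # shift next dividend bit into R
        r -= divisor                        # trial subtraction
        if r < 0:
            r += divisor                    # negative: restore, quotient bit 0
            q = q * 2
        else:
            q = q * 2 + 1                   # non-negative: quotient bit 1
    return q, r

def nonrestoring_divide(dividend: int, divisor: int, bits: int = 8):
    r, q = 0, 0
    for i in range(bits - 1, -1, -1):
        bit = (dividend >> i) & 1
        if r >= 0:
            r = r * 2 + bit - divisor       # subtract while remainder >= 0
        else:
            r = r * 2 + bit + divisor       # add the divisor back on the
                                            # NEXT step instead of restoring
        q = q * 2 + (1 if r >= 0 else 0)
    if r < 0:                               # single final correction
        r += divisor
    return q, r

print(restoring_divide(7, 3))      # (2, 1): quotient 2, remainder 1
print(nonrestoring_divide(7, 3))   # (2, 1): same result, no restore steps
```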