Cache Organization - Computer Organization and Design - Lecture Slides

This is a helpful set of lecture slides from the Computer Organization and Design series. The major points in this lecture are: Cache Organization, Architectural View of Memory, Cache Memory, Part of Main Memory, Multiple Tag/Block Pairs, Cache Terminology, Average Cache Access Time, Cache Misses, Simple Memory System, Locality of Reference.



Cache Organization

Rehashing our terms

  • The Architectural view of memory is:
    • What the machine language sees
    • Memory is just a big array of storage
  • Breaking up the memory system into different pieces – cache, main memory (made up of DRAM) and disk storage – is not architectural.
    • The machine language doesn’t know about it
    • The processor may not know about it
    • A new implementation may not break it up into the same pieces (or break it up at all)

Caching needs to be Transparent!
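Since caching must be transparent, a machine-language program only ever sees the architectural "big array of storage". The sketch below (Python, purely illustrative and not from the slides; all class and method names are made up) shows two memory implementations behind the same load interface: a program gets identical values from both, so it cannot tell whether a cache is present.

```python
# Purely illustrative sketch: two implementations of the same "load" interface.
# A program using load(addr) gets the same values from both, so it cannot tell
# whether a cache sits in front of the backing store -- caching is transparent.

class FlatMemory:
    """Memory as the architecture sees it: one big array of storage."""
    def __init__(self, size=16):
        self.data = list(range(0, size * 10, 10))   # arbitrary made-up contents

    def load(self, addr):
        return self.data[addr]

class CachedMemory:
    """Same interface, but with a small cache in front of a backing FlatMemory."""
    def __init__(self, backing):
        self.backing = backing
        self.cache = {}                     # addr -> value

    def load(self, addr):
        if addr not in self.cache:          # miss: fetch from the backing store
            self.cache[addr] = self.backing.load(addr)
        return self.cache[addr]             # hit, or the block just filled

flat = FlatMemory()
cached = CachedMemory(FlatMemory())
assert all(flat.load(a) == cached.load(a) for a in range(16))   # same values either way
```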

Cache organization

  • A cache memory consists of multiple tag/block pairs
    • Searches can be done in parallel (within reason)
    • At most one tag will match
  • If there is a tag match, it is a cache HIT
  • If there is no tag match, it is a cache MISS
  • Our goal is to keep the data we think will be accessed in the near future in the cache
  • If a block is found in the cache → “hit”
    • Otherwise → “miss”
  • Hit rate = (# hits) / (# requests made to the cache)
    • Miss rate = 1 – Hit rate
  • Hit time = time to access the cache to see if a block is present + time to get the block to the CPU
  • Miss time (aka miss penalty) = time to replace a block in cache with one from DRAM
  • Average Cache access time = hit time + miss rate * miss penalty

Cache Terminology
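To make the average-access-time formula above concrete, here is a tiny worked example with made-up numbers (they are not taken from the slides):

```python
# Hypothetical numbers, for illustration only (not from the slides).
hit_time     = 1      # cycles to check the cache and deliver the block on a hit
miss_rate    = 0.10   # fraction of accesses that miss (hit rate = 0.90)
miss_penalty = 100    # cycles to replace a block in the cache from DRAM

# Average cache access time = hit time + miss rate * miss penalty
avg_access_time = hit_time + miss_rate * miss_penalty
print(avg_access_time)    # 11.0 cycles
```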

A very simple memory system (figure): a processor, a cache, and a 16-byte memory (addresses 0–15). The processor issues the load sequence Ld R1 ← M[1], Ld R2 ← M[5], Ld R3 ← M[1], Ld R3 ← M[7], Ld R2 ← M[7]. The cache has 2 cache blocks, a 4-bit tag field, a 1-byte block size, and a valid bit per block.

A very simple memory system (figure): Ld R1 ← M[1]. Is it in the cache? No valid tags, so this is a cache miss. Allocate: address → tag, Mem[1] → block, mark valid.

A very simple memory system (figure): Misses: 1, Hits: 0. Ld R2 ← M[5]: check tags, 5 ≠ 1, so this is a cache miss; Mem[5] is allocated into the other (LRU) block.

A very simple memory system (figure): Misses: 2, Hits: 0 after the Ld R2 ← M[5] miss.

A very simple memory system (figure): Ld R3 ← M[1] finds a matching tag already in the cache, a hit. Misses: 2, Hits: 1.


Picking the most likely addresses

  • What is the probability of accessing a given memory location?
    • With no information, it is just as likely as any other address
  • Q: Are programs random?
  • A: No!
    • They tend to use the same memory locations over and over.
    • We can use this to pick the most referenced locations to put into the cache
  • A program does not access all of its data & code with equal probability
    • (not even close)
  • Principle of locality of reference:
    • Programs access a relatively small portion of their address space during any given window of time – applies to both instructions and data
    1. Temporal locality: if an item was recently used, it will probably be used again soon
    2. Spatial locality: if an item was recently referenced, nearby items will probably also be referenced soon (both kinds of locality are illustrated in the sketch below)

Locality of Reference
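As a concrete (hypothetical) illustration of both kinds of locality, consider a simple loop that sums an array; the sketch below is my own and not from the slides:

```python
# Illustrative sketch of locality of reference (not from the slides).

def sum_array(a):
    total = 0                 # 'total' and 'i' are reused every iteration -> temporal locality
    for i in range(len(a)):
        total += a[i]         # consecutive elements a[0], a[1], ... -> spatial locality
    return total

# The loop body itself is a small set of instructions executed over and over,
# so the instruction fetches also show strong temporal locality.
print(sum_array(list(range(100))))   # 4950
```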


A very simple memory system (figure): Misses: 2, Hits: 1. Ld R3 ← M[7]: check tags, 7 ≠ 5 and 7 ≠ 1 (MISS!).
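Tying the walkthrough together, the following sketch (my own illustrative code, not the slide authors') simulates the example cache: fully associative, 2 one-byte blocks, LRU replacement, running the five loads shown. For the accesses walked through so far it gives miss, miss, hit, miss; continuing the sequence one more step, the final Ld R2 ← M[7] hits on the block just allocated.

```python
# A minimal sketch (not the original slides' code) of the example cache:
# fully associative, 2 blocks, 1-byte block size, LRU replacement.

from collections import OrderedDict

class TinyCache:
    def __init__(self, num_blocks=2):
        self.num_blocks = num_blocks
        self.blocks = OrderedDict()     # tag -> data, ordered from LRU to MRU

    def load(self, addr, memory):
        tag = addr                      # 1-byte blocks: the whole address is the tag
        if tag in self.blocks:          # HIT: refresh the LRU order and return the block
            self.blocks.move_to_end(tag)
            return self.blocks[tag], "hit"
        if len(self.blocks) == self.num_blocks:
            self.blocks.popitem(last=False)   # MISS: evict the least recently used block
        self.blocks[tag] = memory[addr]       # allocate: address -> tag, Mem[addr] -> block
        return self.blocks[tag], "miss"

memory = list(range(0, 160, 10))        # 16 bytes of made-up data
cache = TinyCache()
for addr in [1, 5, 1, 7, 7]:            # Ld M[1], M[5], M[1], M[7], M[7]
    _, outcome = cache.load(addr, memory)
    print(f"M[{addr}]: {outcome}")
# Prints: miss, miss, hit, miss, hit  -> Misses: 3, Hits: 2
```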