A PowerPoint presentation on memory systems for Computer Organization and Architecture (COA)
Slides: 7
Slide Content
Memory Systems
Syed Ammal Engineering College
Computer Organization and Embedded Systems
Based on: Carl Hamacher et al., 6th Edition, McGraw-Hill
Prepared: Assistant-generated PPT
8.1 Basic Concepts Memory stores instructions and data for processor use. Key metrics: access time, cycle time, bandwidth, capacity, cost, and volatility. Memory systems are designed around speed, cost, and capacity trade-offs, and designers use a hierarchy to balance them. Registers are fastest but few in number; main memory (DRAM) offers larger capacity at higher latency; secondary storage (HDD/SSD/tape) is nonvolatile and slower still. Temporal and spatial locality of reference are what make a hierarchy effective.
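To make these metrics concrete, here is a minimal Python sketch with assumed, illustrative figures (the bus width, transfer rate, and per-level access times are examples, not values from the slides) that computes peak bandwidth and compares access times across levels:

```python
# Illustrative sketch of the basic memory metrics named on this slide.
# All figures below are assumed example values, not from the slides.

bus_width_bytes = 8           # 64-bit data bus (assumed)
transfers_per_second = 200e6  # assumed transfer rate (200 MT/s)

# Bandwidth = bytes moved per transfer * transfers per second
peak_bandwidth = bus_width_bytes * transfers_per_second
print(f"Peak bandwidth: {peak_bandwidth / 1e9:.1f} GB/s")

# Order-of-magnitude access times (assumed, for comparison only)
access_time_ns = {"register": 0.5, "SRAM cache": 2, "DRAM main memory": 60,
                  "SSD": 100_000, "HDD": 5_000_000}
for level, t in access_time_ns.items():
    print(f"{level:>18}: {t:,.1f} ns")
```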
8.2 Semiconductor RAM Memories RAM provides random read/write access and is typically volatile. Two primary types: Static RAM (SRAM) and Dynamic RAM (DRAM). SRAM stores each bit in a bistable latch (flip-flop); it is fast, needs no refresh, and is used in caches. DRAM stores each bit as charge on a capacitor; it requires periodic refresh but is denser and cheaper, so it is used as main memory.
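As a rough illustration of the refresh cost mentioned above, the following sketch uses assumed DRAM parameters (row count, refresh interval, per-row refresh time) to estimate the fraction of time a bank spends refreshing:

```python
# Minimal sketch of what DRAM refresh costs (all parameters assumed).
rows_per_bank = 8192           # assumed number of DRAM rows to refresh
refresh_interval_ms = 64       # every row must be refreshed within this window
time_per_row_refresh_ns = 100  # assumed time to refresh one row

busy_ns = rows_per_bank * time_per_row_refresh_ns
window_ns = refresh_interval_ms * 1e6
overhead = busy_ns / window_ns
print(f"Fraction of time spent refreshing: {overhead:.2%}")  # ~1.28%
# SRAM cells are bistable latches, so they pay no such overhead
# (at the cost of more transistors per bit and lower density).
```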
8.3 Read-Only Memories (ROM family) ROM types provide nonvolatile storage for firmware and microcode. Variants: mask ROM (programmed at manufacture), PROM (one-time programmable), EPROM (UV erasable), EEPROM, and Flash (electrically reprogrammable). Trade-offs involve programmability, erase granularity, write speed and endurance, and cost. Flash is erased a block at a time and requires wear-leveling and block-erase management.
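The need for block erase and wear-leveling can be shown with a toy model; the 8-block array and least-worn allocation policy below are simplifying assumptions, not how any particular flash controller works:

```python
# Toy sketch of flash block management: erase counts per block and a
# least-worn allocation policy (a simplification of real wear-leveling).
erase_counts = {block: 0 for block in range(8)}   # assumed 8 erase blocks

def pick_block_to_erase():
    """Choose the block with the fewest erases so wear spreads evenly."""
    return min(erase_counts, key=erase_counts.get)

def erase(block):
    erase_counts[block] += 1   # flash is erased a whole block at a time

for _ in range(20):            # simulate 20 block erases
    erase(pick_block_to_erase())

print(erase_counts)            # erases end up spread roughly evenly
```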
8.4 Direct Memory Access (DMA) DMA allows peripherals to transfer data to or from memory without CPU intervention, improving I/O throughput and offloading the CPU. Techniques: cycle stealing, burst mode, and bus mastering. A DMA controller accesses memory directly for block transfers. In cycle stealing, the controller takes individual bus cycles from the CPU; in burst mode, it holds the bus and transfers an entire block at once. Proper bus arbitration is required to decide which master controls the bus.
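A back-of-the-envelope comparison shows why DMA offloads the CPU; the cycle counts below are assumed purely for illustration:

```python
# Rough model of CPU overhead: programmed I/O vs DMA (all figures assumed).
block_words = 4096
cycles_per_word_pio = 20      # assumed CPU cycles to poll and copy one word
dma_setup_cycles = 500        # assumed cycles to program the DMA controller
dma_interrupt_cycles = 300    # assumed cycles to service the completion interrupt

pio_cpu_cycles = block_words * cycles_per_word_pio
dma_cpu_cycles = dma_setup_cycles + dma_interrupt_cycles  # data moves without the CPU

print(f"Programmed I/O: {pio_cpu_cycles:,} CPU cycles")
print(f"DMA:            {dma_cpu_cycles:,} CPU cycles")
print(f"CPU work saved: {1 - dma_cpu_cycles / pio_cpu_cycles:.1%}")
```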
8.5 Memory Hierarchy The hierarchy exploits temporal and spatial locality: faster, smaller, more expensive memory sits closer to the CPU; slower, larger, cheaper memory sits farther away. Registers and L1 cache are the fastest and smallest; main memory is slower and larger; secondary storage is the largest and slowest. Caching and virtual memory make a large memory space appear fast, and effective hierarchy design minimizes average access time.
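Average access time can be estimated with the usual AMAT relation, AMAT = hit time + miss rate * miss penalty, applied level by level; the hit times and miss rates below are assumed example values:

```python
# Average memory access time (AMAT) for a two-level cache (assumed numbers).
l1_hit_time = 1        # cycles
l1_miss_rate = 0.05
l2_hit_time = 10       # cycles
l2_miss_rate = 0.20    # of the accesses that reach L2
memory_time = 100      # cycles

# Apply AMAT = hit time + miss rate * miss penalty, level by level.
l2_amat = l2_hit_time + l2_miss_rate * memory_time
amat = l1_hit_time + l1_miss_rate * l2_amat
print(f"AMAT: {amat:.2f} cycles")   # 1 + 0.05 * (10 + 0.20 * 100) = 2.50
```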
Summary of Key Points The memory hierarchy balances speed, capacity, and cost. SRAM and DRAM target different hierarchy levels. Caches improve access time using mapping and replacement policies. Virtual memory and secondary storage extend the usable memory space. DMA enables efficient I/O by offloading the CPU.