A computer consists of five functional units: Input Unit, Output Unit, Memory Unit, Arithmetic Logic Unit (ALU), and Control Unit.
Von Neumann Architecture: the stored-program concept. Instructions and data share a single memory and travel over a common bus; the CPU fetches and executes instructions one after another.
CPU Components: ALU (arithmetic and logic operations), Control Unit (decodes instructions and generates control signals), and registers (PC, IR, MAR, MDR, and general-purpose registers).
Instruction Cycle (Fetch-Decode-Execute):
1. Fetch: PC→MAR, Memory[MAR]→MDR→IR, PC++
2. Decode: Control Unit decodes IR
3. Execute: ALU performs operation
4. Store: Result stored in register or memory
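The fetch-decode-execute cycle above can be sketched as a toy simulator. This is a minimal illustration for a hypothetical one-address (accumulator) machine; the instruction encoding, opcode names, and the tiny program are assumptions, not part of the notes.

```python
# Toy fetch-decode-execute loop for an assumed one-address machine.
# Memory holds both instructions (as tuples) and data, in the
# stored-program spirit of the Von Neumann model.

memory = {
    0: ("LOAD", 10),    # ACC <- M[10]
    1: ("ADD", 11),     # ACC <- ACC + M[11]
    2: ("STORE", 12),   # M[12] <- ACC
    3: ("HALT", None),
    10: 5, 11: 7, 12: 0,
}

pc, acc = 0, 0
while True:
    ir = memory[pc]           # Fetch: PC -> MAR, M[MAR] -> MDR -> IR
    pc += 1                   # PC++
    opcode, operand = ir      # Decode
    if opcode == "LOAD":      # Execute / Store
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        break

print(memory[12])  # 12
```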
An instruction = Opcode + Operand(s)
| Format | Structure | Example |
|---|---|---|
| Zero Address | Opcode only (stack-based) | PUSH, POP |
| One Address | Opcode + 1 operand (accumulator) | LOAD X |
| Two Address | Opcode + 2 operands | ADD R1, R2 |
| Three Address | Opcode + 3 operands | ADD R1, R2, R3 |
How the operand location is specified:
| Mode | Description | Effective Address |
|---|---|---|
| Immediate | Operand is the value itself | No memory access |
| Direct | Address field is the memory address | EA = A |
| Indirect | Address field holds the address of the address | EA = M[A] |
| Register | Operand is in a register | EA = register |
| Register Indirect | Register holds the address | EA = (R) |
| Indexed | Index register plus offset | EA = (Index Reg) + A |
| Base Relative | Base register plus displacement | EA = (Base Reg) + A |
| Auto-Increment | Use register, then increment it | EA = (R), then R++ (useful for arrays) |
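The effective-address rules in the table can be made concrete with a short sketch. The memory contents, register values, and address field A below are made-up example values, not taken from the notes.

```python
# Illustrative effective-address (EA) computation for a few modes.
# All concrete values here are assumptions for demonstration.

memory = {100: 200, 200: 42}
R = 100          # a general-purpose register holding an address
index_reg = 20   # index register
A = 100          # address field of the instruction

ea_direct = A                  # Direct:   EA = A
ea_indirect = memory[A]        # Indirect: EA = M[A]
ea_reg_indirect = R            # Register Indirect: EA = (R)
ea_indexed = index_reg + A     # Indexed:  EA = (Index Reg) + A

print(ea_indirect, memory[ea_indirect])  # 200 42
```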
Registers → Cache (L1/L2/L3) → RAM → SSD/HDD → Cloud Storage
| Level | Speed | Size | Cost/Bit |
|---|---|---|---|
| Registers | ~1 ns | 32-256 bytes | Very High |
| L1 Cache | ~1 ns | 32-64 KB | High |
| L2 Cache | ~5 ns | 256 KB - 1 MB | High |
| L3 Cache | ~20 ns | 4-32 MB | Moderate |
| RAM (DRAM) | ~50-100 ns | 4-64 GB | Low |
| SSD | ~0.1 ms | 256 GB - 2 TB | Very Low |
| HDD | ~5-10 ms | 1-20 TB | Cheapest |
Principle of Locality: programs tend to reuse data and instructions accessed recently (temporal locality) and to access addresses near recently used ones (spatial locality). Caches exploit both.
Cache Mapping Techniques:
Direct Mapping: Each main memory block maps to exactly one cache line
Fully Associative: Block can go to any cache line
Set Associative (k-way): Cache divided into sets; block maps to a set
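For direct mapping, a memory address is split into tag, index (cache line), and block offset. The sketch below shows that split; the cache geometry (64 lines, 16-byte blocks) is an assumed example, not from the notes.

```python
# Assumed direct-mapped cache geometry for illustration.
BLOCK_SIZE = 16   # bytes per block
NUM_LINES = 64    # number of cache lines

def split_address(addr: int):
    """Split a byte address into (tag, index, offset) for direct mapping."""
    offset = addr % BLOCK_SIZE         # byte within the block
    block = addr // BLOCK_SIZE         # main-memory block number
    index = block % NUM_LINES          # which cache line the block maps to
    tag = block // NUM_LINES           # distinguishes blocks sharing a line
    return tag, index, offset

print(split_address(0x1234))  # (4, 35, 4)
```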
Replacement Policies: LRU (evict the least recently used block), FIFO (evict the oldest block), LFU (evict the least frequently used block), Random.
Cache Performance:
Average Access Time = Hit Rate × Cache Time + Miss Rate × Main Memory Time
Miss Rate = 1 - Hit Rate
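The two formulas above can be wrapped in a small helper. The sample hit rate and timings passed in are illustrative values.

```python
def avg_access_time(hit_rate: float, cache_ns: float, mem_ns: float) -> float:
    """Average access time = hit_rate * cache time + miss_rate * memory time."""
    miss_rate = 1 - hit_rate
    return hit_rate * cache_ns + miss_rate * mem_ns

print(round(avg_access_time(0.90, 10, 100), 2))  # 19.0
```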
Pipelining allows overlapping multiple instruction executions to improve throughput.
Classic 5-Stage Pipeline:
IF (Instruction Fetch) → ID (Decode) → EX (Execute) → MEM (Memory) → WB (Write Back)
Speedup Formula:
For n instructions on a k-stage pipeline:
Speedup = (n × k) / (k + n − 1)
Ideal speedup (for large n) = k, the number of pipeline stages
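The speedup formula as code, using the standard cycle counts: n instructions take n × k cycles unpipelined and k + (n − 1) cycles on a k-stage pipeline.

```python
def pipeline_speedup(n: int, k: int) -> float:
    """Speedup of a k-stage pipeline over unpipelined execution of n instructions."""
    unpipelined = n * k          # n instructions, k cycles each
    pipelined = k + (n - 1)      # fill the pipeline, then 1 cycle per instruction
    return unpipelined / pipelined

print(round(pipeline_speedup(100, 4), 2))  # 3.88
```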
Pipeline Hazards:
| Hazard | Cause | Solution |
|---|---|---|
| Structural | Two instructions need the same hardware | Stall or duplicate hardware |
| Data (RAW) | Instruction needs the result of a previous one | Forwarding/bypassing, stall |
| Control (Branch) | Branch direction unknown | Branch prediction, delayed branch |
Data Hazards (Types): RAW (Read After Write, true dependency), WAR (Write After Read, anti-dependency), WAW (Write After Write, output dependency). Only RAW arises in a simple in-order pipeline; WAR and WAW appear with out-of-order execution.
| Technique | CPU Involvement | Speed | Use |
|---|---|---|---|
| Programmed I/O | 100% (polling) | Slow | Simple devices |
| Interrupt-Driven | Only at completion | Better | Keyboard, mouse |
| DMA | Minimal (setup only) | Fast | Disk, network |
DMA (Direct Memory Access): a DMA controller transfers blocks of data between an I/O device and main memory directly, bypassing the CPU. The CPU only programs the controller (source, destination, word count) and services an interrupt when the transfer completes; the controller shares the memory bus via cycle stealing or burst mode.
| Feature | RISC | CISC |
|---|---|---|
| Instruction Set | Small, simple | Large, complex |
| Instruction Length | Fixed | Variable |
| Addressing Modes | Few | Many |
| Execution | ~1 clock per instruction | Multiple clocks |
| Registers | Many | Few |
| Examples | ARM, MIPS, RISC-V | x86 (Intel, AMD) |
| Use | Mobile, embedded | Desktop, server |
Q1 (2023): What is the speedup of a 4-stage pipeline executing 100 instructions?
Without pipeline: 100 × 4 = 400 cycles
With pipeline: 4 + (100 − 1) × 1 = 103 cycles
Speedup = 400 / 103 ≈ 3.88×
Q2 (2022): Calculate the average access time for a cache hit rate of 90%, cache time 10 ns, RAM time 100 ns.
Average = 0.90 × 10 + 0.10 × 100 = 9 + 10 = 19 ns
Q3 (2023): What are the advantages of pipelining? Increases throughput (instructions per second), better CPU utilization, allows parallel instruction execution, reduces effective CPI (Cycles Per Instruction).
Complete COA notes for B.Tech CS Semester 5 — CPU organization, instruction formats, memory hierarchy, cache, pipelining, I/O organization, and RISC vs CISC.
54 pages · 2.7 MB · Updated 2026-03-11
Architecture (ISA): what the programmer sees — instruction set, addressing modes, registers. Organization: how hardware implements architecture — control signals, interfaces, memory technology.
Pipelining overlaps the execution of multiple instructions to improve throughput. Hazards: Structural (resource conflict), Data (dependency), Control (branch). Ideal speedup equals the number of pipeline stages (k).
Cache is a small, very fast memory between CPU and RAM. Stores recently/frequently used data. Hit: data found in cache (fast). Miss: data not in cache, fetched from RAM (slow). L1/L2/L3 are cache levels.