Computer Organization: hardware components and how they interact. Computer Architecture: programmer-visible attributes (instruction set, data types, I/O mechanisms).
Basic Computer Structure:
Input → [CPU] → Output
↕
[Memory]
↕
[I/O Devices]
CPU = ALU + Control Unit + Registers
| System | Base | Digits | Example |
|--------|------|--------|---------|
| Binary | 2 | 0,1 | 1010₂ |
| Octal | 8 | 0-7 | 12₈ |
| Decimal | 10 | 0-9 | 10₁₀ |
| Hexadecimal | 16 | 0-9,A-F | A₁₆ |
Conversions:
Decimal → Binary (divide by 2, read remainders upward):
45 ÷ 2 = 22 R 1
22 ÷ 2 = 11 R 0
11 ÷ 2 = 5 R 1
5 ÷ 2 = 2 R 1
2 ÷ 2 = 1 R 0
1 ÷ 2 = 0 R 1 → Read upward: 101101₂
Binary → Hex (group 4 bits):
10111101 → 1011 1101 → B D → BD₁₆
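Both conversion procedures above can be sketched in a few lines of Python (function names are illustrative):

```python
def dec_to_bin(n: int) -> str:
    """Decimal -> binary: divide by 2 repeatedly, read remainders upward."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))      # remainder of each division
        n //= 2                      # quotient carries forward
    return "".join(reversed(bits))   # "read upward"

def bin_to_hex(b: str) -> str:
    """Binary -> hex: pad to a multiple of 4, convert each 4-bit group."""
    b = b.zfill((len(b) + 3) // 4 * 4)
    return "".join(format(int(b[i:i+4], 2), "X") for i in range(0, len(b), 4))

print(dec_to_bin(45))          # 101101
print(bin_to_hex("10111101"))  # BD
```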
2's Complement (negative numbers):
+13 = 00001101
-13: invert → 11110010, add 1 → 11110011
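The same invert-and-add-1 steps, checked in Python (masking with `0xFF` keeps the result to 8 bits):

```python
def twos_complement(value: int, bits: int = 8) -> str:
    """Two's-complement bit pattern of value in the given width."""
    return format(value & ((1 << bits) - 1), f"0{bits}b")

inverted = (~13) & 0xFF       # invert all bits: 11110010
negative = (inverted + 1) & 0xFF  # add 1:      11110011

print(twos_complement(13))    # 00001101
print(twos_complement(-13))   # 11110011  (matches invert + 1)
```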
Program Counter (PC): Next instruction address
Instruction Register (IR): Current instruction
Memory Address Register (MAR): Memory address to access
Memory Buffer Register (MBR): Data to/from memory
Accumulator (AC): ALU results
Stack Pointer (SP): Top of stack
Status Register (SR): Flags — Zero, Carry, Overflow, Sign
General Purpose: R0-R15 (RISC-style)
1. FETCH: PC → MAR; Memory[MAR] → MBR; MBR → IR; PC++
2. DECODE: IR decoded by Control Unit
3. EXECUTE: ALU performs operation
4. (optional) WRITEBACK: Result stored
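The cycle above can be illustrated with a toy interpreter; the two-field "ISA" here is invented purely for the sketch:

```python
# Toy instruction memory: (opcode, operand) pairs, assumed for illustration.
memory = [("LOAD", 7), ("ADD", 3), ("HALT", 0)]
pc, ac, running = 0, 0, True

while running:
    ir = memory[pc]     # FETCH: Memory[PC] -> IR
    pc += 1             # PC++ points at the next instruction
    op, arg = ir        # DECODE: split instruction into fields
    if op == "LOAD":    # EXECUTE / WRITEBACK into the accumulator
        ac = arg
    elif op == "ADD":
        ac += arg
    elif op == "HALT":
        running = False

print(ac)  # 10
```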
Instruction Format (RISC 32-bit):
[Opcode 6 bits][Src1 5 bits][Src2 5 bits][Dest 5 bits][Func 11 bits]
ADD R1, R2, R3 → R1 = R2 + R3
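Packing and unpacking that 32-bit layout is just shifts and masks; the opcode/func values below are made up for the example:

```python
def encode(opcode: int, src1: int, src2: int, dest: int, func: int) -> int:
    """Pack fields into one 32-bit word: [op 6][src1 5][src2 5][dest 5][func 11]."""
    return (opcode << 26) | (src1 << 21) | (src2 << 16) | (dest << 11) | func

def decode(word: int):
    """Unpack the same five fields from a 32-bit word."""
    return (word >> 26, (word >> 21) & 0x1F, (word >> 16) & 0x1F,
            (word >> 11) & 0x1F, word & 0x7FF)

# ADD R1, R2, R3 with assumed opcode=0, func=0x20:
word = encode(0, 2, 3, 1, 0x20)
print(decode(word))  # (0, 2, 3, 1, 32)
```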
Speed (fast→slow): Registers → L1 Cache → L2 Cache → L3 → RAM → SSD → HDD
Size (small→large): Registers → L1 → L2 → L3 → RAM → SSD → HDD
Cost (high→low): Registers → L1 → L2 → L3 → RAM → SSD → HDD
Direct Mapped: Each memory block → specific cache line
Cache line = (Block address) mod (Number of cache lines)
Simple, fast, but conflict misses
Set Associative: Each block → set of lines (2-way, 4-way)
Compromise between direct and fully associative
Fully Associative: Block → any cache line
Least conflict, expensive, requires parallel search
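Direct mapping is a one-line formula, which also makes the conflict-miss problem easy to see: blocks that differ by a multiple of the line count land on the same line.

```python
def direct_mapped_line(block_addr: int, num_lines: int) -> int:
    """Direct mapping: line = block address mod number of cache lines."""
    return block_addr % num_lines

# Blocks 3 and 11 collide in an 8-line cache (conflict miss):
print(direct_mapped_line(3, 8), direct_mapped_line(11, 8))  # 3 3
```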
Replacement Policies:
LRU (Least Recently Used) — most common
FIFO — simple
Random — sometimes works as well as LRU
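A minimal LRU sketch using an ordered dict as the recency list (a real cache tracks sets of tagged lines; this only models the replacement decision):

```python
from collections import OrderedDict

class LRUCache:
    """Evict the least recently used block when the cache is full."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines = OrderedDict()  # block -> data, oldest first

    def access(self, block) -> bool:
        if block in self.lines:               # hit: mark as most recent
            self.lines.move_to_end(block)
            return True
        if len(self.lines) >= self.capacity:  # miss on a full cache:
            self.lines.popitem(last=False)    # evict the LRU block
        self.lines[block] = None
        return False

cache = LRUCache(2)
hits = [cache.access(b) for b in [1, 2, 1, 3, 2]]
print(hits)  # [False, False, True, False, False]
```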
Without pipeline (5 stage, 5ns each):
Instruction: 5×5 = 25ns each
With pipeline (5 stage):
After filling: 1 instruction every 5ns (5x speedup ideal)
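The timing claim above checks out numerically: the pipeline needs (stages − 1) cycles to fill, after which it retires one instruction per cycle.

```python
def serial_time(n_instr: int, stages: int = 5, stage_ns: int = 5) -> int:
    """No pipeline: every instruction pays all stages in sequence."""
    return n_instr * stages * stage_ns

def pipeline_time(n_instr: int, stages: int = 5, stage_ns: int = 5) -> int:
    """Pipelined: (stages - 1) cycles to fill, then one instruction per cycle."""
    return (stages + n_instr - 1) * stage_ns

print(serial_time(100), pipeline_time(100))  # 2500 520 -> speedup ~4.8x
```

For 100 instructions the speedup is 2500/520 ≈ 4.8x, approaching the ideal 5x as the fill cost is amortized.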
5-stage RISC pipeline:
IF (Instruction Fetch) → ID (Instruction Decode/Register Read)
→ EX (Execute/ALU) → MEM (Memory Access) → WB (Write Back)
Pipeline Hazards:
Data Hazard (RAW — Read After Write):
ADD R1, R2, R3 → R1 = R2 + R3
SUB R4, R1, R5 → R4 = R1 - R5 (R1 not ready!)
Solution: Forwarding/Bypassing — route result directly
Stalling (NOP bubble) if forwarding not possible
Control Hazard (Branch):
BEQ R1, R2, label → don't know next PC until compare done
Solution: Branch prediction (modern CPUs > 95% accuracy)
Delayed branching (RISC)
Speculative execution
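One classic predictor is the 2-bit saturating counter (a simplified sketch of what real branch predictors table per branch address):

```python
class TwoBitPredictor:
    """2-bit saturating counter: states 0,1 predict not-taken; 2,3 predict taken."""
    def __init__(self):
        self.state = 0

    def predict(self) -> bool:
        return self.state >= 2

    def update(self, taken: bool):
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

# A loop branch taken 8 times, one loop exit, then taken 8 times again:
p = TwoBitPredictor()
outcomes = [True] * 8 + [False] + [True] * 8
correct = 0
for taken in outcomes:
    correct += (p.predict() == taken)
    p.update(taken)
print(correct, "/", len(outcomes))  # 14 / 17
```

The two-step hysteresis is why it beats a 1-bit predictor on loops: a single loop exit only mispredicts once, not twice.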
I/O Methods:
1. Programmed I/O: CPU polls device status (busy waiting)
2. Interrupt-driven: Device interrupts CPU when ready
3. DMA: Device transfers directly to memory
DMA Transfer:
CPU → DMA Controller (start addr, count, direction)
DMA ↔ Memory (direct, CPU free)
DMA → CPU (interrupt when done)
Interrupt Types:
Hardware: keyboard, disk, timer
Software: system calls (INT instruction)
Exception: divide by zero, page fault
Interrupt Handling:
1. Save CPU state (PC, registers)
2. Identify interrupt source (interrupt vector)
3. Execute ISR (Interrupt Service Routine)
4. Restore state, return
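The four steps map onto a vector-table dispatch; handler names and state layout below are illustrative only:

```python
def keyboard_isr(): return "keyboard handled"
def timer_isr():    return "timer handled"

interrupt_vector = {1: keyboard_isr, 2: timer_isr}  # vector number -> ISR

def handle_interrupt(vector: int, cpu_state: dict) -> str:
    saved = dict(cpu_state)           # 1. save CPU state (PC, registers)
    isr = interrupt_vector[vector]    # 2. identify source via the vector table
    result = isr()                    # 3. execute the ISR
    cpu_state.update(saved)           # 4. restore state before returning
    return result

print(handle_interrupt(2, {"pc": 100}))  # timer handled
```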
Q: What is the difference between RISC and CISC? A: RISC (Reduced Instruction Set): simple instructions, fixed length, load-store architecture, easy to pipeline. CISC (Complex Instruction Set): complex multi-cycle instructions, variable length. ARM = RISC, x86 = CISC (internally translates to micro-ops).
Q: What is thrashing? A: When RAM is full and the OS constantly swaps pages to and from disk, the CPU spends most of its time waiting on I/O and very little doing actual work. Fix: add physical RAM or close some processes.
Q: What does Amdahl's Law say? A: Speedup is limited by the non-parallelizable part. If 20% of the code is sequential, max speedup = 1/0.2 = 5x, no matter how many processors you add.
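Amdahl's Law as a formula, showing the 5x ceiling for a 20% sequential fraction:

```python
def amdahl_speedup(serial_fraction: float, n_processors: float) -> float:
    """Amdahl's Law: speedup = 1 / (s + (1 - s) / N)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

print(round(amdahl_speedup(0.2, 10), 2))   # 3.57 with 10 processors
print(round(amdahl_speedup(0.2, 1e9), 2))  # 5.0 -> the 1/0.2 ceiling
```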
Complete COA notes for B.Tech IT Sem 3 — Number systems, CPU architecture, Instruction cycle, Memory hierarchy, Cache, Pipeline, I/O organization with viva questions.
42 pages · 2.1 MB · Updated 2026-03-11
CPU + Memory + I/O share a single bus. Program and data both live in the same memory. Fetch-Decode-Execute cycle. Modern computers are based on this design. Bottleneck: memory bandwidth (the Von Neumann bottleneck).
Fast SRAM memory sitting between the CPU and RAM. Exploits locality of reference. Hit rates exceed 95% today. L1 (4-64KB, ~1ns), L2 (256KB-4MB, ~5ns), L3 (8-32MB, ~30ns) vs RAM (~100ns).
Structural: resource conflict (same hardware). Data: instruction depends on previous result (RAW, WAR, WAW). Control: branch outcome unknown. Solutions: stalling, forwarding, branch prediction.
2's complement is the most common representation: positive numbers are stored as-is, negative = invert all bits + 1. 8-bit: +5 = 00000101, -5 = 11111011. Range: -128 to +127.
Direct Memory Access — an I/O device transfers data to or from memory directly, without involving the CPU. Used for large transfers (disk I/O). The CPU stays free for other work.