Artificial Intelligence is the simulation of human intelligence in machines — enabling them to learn, reason, solve problems, perceive, and understand language.
AI Approaches (Russell & Norvig):
| Approach | Goal |
|---|---|
| Think like humans | Cognitive modeling |
| Act like humans | Turing Test |
| Think rationally | Laws of thought |
| Act rationally | Rational agent |
Rational Agent: An agent that acts to achieve the best expected outcome given its percepts.
PEAS Framework: Performance measure, Environment, Actuators, Sensors.
Environment Types:
- Fully vs. partially observable
- Deterministic vs. stochastic
- Episodic vs. sequential
- Static vs. dynamic
- Discrete vs. continuous
- Single-agent vs. multi-agent
BFS (Breadth-First Search): Expands the shallowest unexpanded node first (FIFO queue). Complete; optimal when all step costs are equal.
DFS (Depth-First Search): Expands the deepest unexpanded node first (LIFO stack). Uses little memory, but is not complete on infinite paths and not optimal.
Depth-Limited Search: DFS with depth limit l. Avoids infinite paths.
Iterative Deepening DFS (IDDFS): Repeats DLS with increasing depth limit.
Uniform Cost Search: Expands lowest path-cost node. Optimal for varying costs.
Comparison:
| Algorithm | Complete | Optimal | Time | Space |
|---|---|---|---|---|
| BFS | Yes | Yes (unit) | O(b^d) | O(b^d) |
| DFS | No | No | O(b^m) | O(bm) |
| IDDFS | Yes | Yes (unit) | O(b^d) | O(bd) |
| UCS | Yes | Yes | O(b^(C*/ε)) | O(b^(C*/ε)) |
b = branching factor, d = solution depth, m = maximum depth, C* = optimal solution cost, ε = minimum step cost
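As a concrete illustration of the BFS strategy above, here is a minimal sketch that returns a shortest path (fewest edges) on an adjacency-list graph; the graph, function name, and node labels are illustrative, not from the original notes.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Return a shortest path (fewest edges) from start to goal, or None."""
    frontier = deque([[start]])   # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:   # mark on enqueue to avoid revisits
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Because the frontier is a FIFO queue, nodes are expanded level by level, which is exactly why BFS is optimal for unit step costs.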
Greedy Best-First Search: Expand node with lowest h(n) (heuristic estimate to goal).
A* Search: Expand node with lowest f(n) = g(n) + h(n)
Admissible Heuristic: h(n) ≤ true cost to goal
Consistent/Monotone Heuristic: h(n) ≤ c(n,a,n') + h(n')
Common Heuristics:
- Manhattan distance (grid paths, 8-puzzle)
- Number of misplaced tiles (8-puzzle)
- Straight-line (Euclidean) distance (route finding)
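The A* expansion rule f(n) = g(n) + h(n) with a Manhattan-distance heuristic can be sketched on a 4-connected grid as follows; the function names and the passability callback are illustrative assumptions, not from the notes.

```python
import heapq

def astar(grid_ok, start, goal):
    """A* on a 4-connected grid of (row, col) cells, unit step cost.

    grid_ok(cell) -> True if cell is passable.
    """
    def h(c):  # Manhattan distance: admissible on a 4-connected grid
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    best_g = {start: 0}
    while open_heap:
        f, g, cell, path = heapq.heappop(open_heap)  # lowest f first
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            ng = g + 1
            if grid_ok(nxt) and ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

on_grid = lambda c: 0 <= c[0] < 3 and 0 <= c[1] < 3  # open 3x3 grid
print(astar(on_grid, (0, 0), (2, 2)))  # a shortest path: 5 cells (4 unit moves)
```

Since Manhattan distance never overestimates on such a grid, it is admissible, and A* returns an optimal path.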
Propositional Logic: Statements are either True or False. Uses connectives: ¬ (not), ∧ (and), ∨ (or), → (implies), ↔ (iff)
Inference Rules:
- Modus Ponens: from P and P → Q, infer Q
- Modus Tollens: from ¬Q and P → Q, infer ¬P
- And-Elimination: from P ∧ Q, infer P
- Resolution: from P ∨ Q and ¬P ∨ R, infer Q ∨ R
First-Order Logic (FOL): Extends propositional logic with: Objects, Predicates, Functions, Quantifiers (∀, ∃)
Example: "Every student studies" → ∀x (Student(x) → Studies(x))
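Over a finite domain, a quantified formula like the one above can be checked directly by iterating over all objects; the sets and names below are made-up illustrations.

```python
students = {"asha", "ravi"}          # Student(x) holds for these objects
studies = {"asha", "ravi", "meera"}  # Studies(x) holds for these objects
domain = students | studies

# ∀x (Student(x) → Studies(x)): the implication P → Q is (not P) or Q
holds = all((x not in students) or (x in studies) for x in domain)
print(holds)  # True — every student is also in the studies set
```

Note that the implication is encoded as `(not P) or Q`, matching its truth table.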
Expert Systems: Knowledge-based systems that mimic human expert decision making.
Components:
- Knowledge base (domain facts and rules)
- Inference engine (applies rules to facts)
- User interface
- Explanation facility
Machine Learning: System learns from data without being explicitly programmed.
Types of ML:
| Type | Training Data | Goal | Example |
|---|---|---|---|
| Supervised | Labeled (input+output) | Learn input→output mapping | Spam detection |
| Unsupervised | Unlabeled | Find patterns/structure | Customer clustering |
| Reinforcement | Reward signals | Maximize cumulative reward | Game playing, robotics |
| Semi-supervised | Mix of labeled + unlabeled | Reduce labeling cost | Medical imaging |
Linear Regression: Predict continuous output. y = mx + b; minimize Mean Squared Error (MSE)
Logistic Regression: Binary classification. σ(z) = 1/(1+e^-z); outputs probability 0-1
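The logistic-regression prediction step above can be sketched in a few lines; the weights and inputs are arbitrary illustrative values.

```python
import math

def sigmoid(z):
    """σ(z) = 1/(1+e^-z): squashes any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, x):
    """Probability of the positive class for feature vector x."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias  # linear score
    return sigmoid(z)

p = predict([2.0, -1.0], 0.5, [1.0, 1.0])  # z = 2 - 1 + 0.5 = 1.5
print(round(p, 3))  # 0.818
```

A threshold (commonly 0.5) on the returned probability turns this into a binary classifier.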
Decision Tree: Tree of if-else decisions. Split criterion: Information Gain (Entropy) or Gini Impurity
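The entropy-based split criterion can be computed directly; the toy labels below are illustrative.

```python
import math
from collections import Counter

def entropy(labels):
    """H = -Σ p·log2(p) over the class proportions in labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, splits):
    """Parent entropy minus the size-weighted entropy of the child splits."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in splits)

parent = ["yes"] * 5 + ["no"] * 5                      # H = 1.0 (50/50)
left, right = ["yes"] * 4 + ["no"], ["yes"] + ["no"] * 4
print(round(information_gain(parent, [left, right]), 3))  # 0.278
```

The tree builder greedily chooses the attribute whose split maximizes this gain.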
Random Forest: Ensemble of decision trees; majority voting.
SVM (Support Vector Machine): Find hyperplane that maximizes margin between classes.
k-Nearest Neighbors (kNN): Classify based on k closest training examples.
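A minimal kNN classifier following the definition above; the training points and labels are made up for illustration.

```python
from collections import Counter

def knn_predict(train, x, k=3):
    """train: list of (features, label); classify x by majority of k nearest."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))  # squared Euclidean
    nearest = sorted(train, key=lambda pair: dist(pair[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), "A"), ((0, 1), "A"), ((5, 5), "B"), ((6, 5), "B"), ((1, 0), "A")]
print(knn_predict(train, (0.5, 0.5)))  # A — the three nearest points are all class A
```

kNN has no training phase; all work happens at prediction time, so it is called a lazy learner.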
| Metric | Formula | Use |
|---|---|---|
| Accuracy | (TP+TN)/(TP+TN+FP+FN) | Overall correctness |
| Precision | TP/(TP+FP) | When FP is costly (spam) |
| Recall | TP/(TP+FN) | When FN is costly (disease) |
| F1 Score | 2×P×R/(P+R) | Balance precision & recall |
Confusion Matrix:

| | Predicted Pos | Predicted Neg |
|---|---|---|
| Actual Pos | TP | FN |
| Actual Neg | FP | TN |
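The four metrics can be computed straight from the confusion-matrix counts; the example uses the TP=80, FP=20, FN=10, TN=90 counts from the 2024 solved question.

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

p, r, f1, acc = metrics(tp=80, fp=20, fn=10, tn=90)
print(p, round(r, 3), round(f1, 3), acc)  # 0.8 0.889 0.842 0.85
```

Note that F1 is the harmonic mean of precision and recall, so it is dragged down by whichever of the two is worse.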
Overfitting vs Underfitting:
- Overfitting: model fits the training data too closely (high variance); low training error but high test error. Remedies: more data, regularization, simpler model.
- Underfitting: model too simple to capture the underlying pattern (high bias); high error on both training and test data. Remedies: more features, more complex model.
Artificial Neuron (Perceptron): Output = activation_function(Σ(weight × input) + bias)
Activation Functions:
| Function | Formula | Use |
|---|---|---|
| Sigmoid | 1/(1+e^-x) | Binary output |
| ReLU | max(0,x) | Hidden layers (most common) |
| Tanh | (e^x-e^-x)/(e^x+e^-x) | Hidden layers |
| Softmax | e^(x_i)/Σ_j e^(x_j) | Multi-class output |
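The perceptron formula Output = activation(Σ(weight × input) + bias) maps directly to code; the weights, bias, and inputs below are arbitrary illustrative values.

```python
import math

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(weights, bias, inputs, activation=relu):
    """Single artificial neuron: activation(Σ w·x + b)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum
    return activation(z)

print(neuron([0.5, -0.25], 0.1, [1.0, 2.0]))           # z = 0.5 - 0.5 + 0.1 = 0.1 → 0.1
print(round(neuron([0.5, -0.25], 0.1, [1.0, 2.0], sigmoid), 3))  # σ(0.1) ≈ 0.525
```

An MLP simply stacks layers of such neurons, with each layer's outputs feeding the next layer's inputs.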
Multilayer Perceptron (MLP):
- Input layer → one or more hidden layers → output layer
- Fully connected layers with nonlinear activations in the hidden layers
- Trained with backpropagation and gradient descent
Deep Learning: Neural networks with many hidden layers.
Q1 (2023): Apply A* to find shortest path. g(n) = actual cost, h(n) = Manhattan distance. Expand nodes in order of f = g + h. Always expand lowest f. Continue until goal node is expanded.
Q2 (2022): What is the difference between classification and regression? Classification predicts discrete categories (spam/not spam, disease/healthy). Regression predicts continuous values (house price, temperature). Different loss functions: cross-entropy for classification, MSE for regression.
Q3 (2024): Calculate precision and recall: TP=80, FP=20, FN=10, TN=90. Precision = 80/(80+20) = 80% | Recall = 80/(80+10) = 88.9%
Complete AI notes for B.Tech CS Semester 5 — search algorithms, knowledge representation, propositional logic, Bayesian networks, machine learning basics, and neural networks.
58 pages · 2.9 MB · Updated 2026-03-11
BFS explores level by level, guaranteed shortest path, more memory. DFS goes deep first, less memory, may not find shortest path, can get stuck in infinite paths.
Supervised learning uses labeled data to learn a mapping from inputs to outputs. Unsupervised learning finds patterns in unlabeled data without a target output.