Rough Sets, Pareto Optimality, and Swarm Algorithms Explained
24(a) Meaning of Indiscernibility
Two objects are indiscernible if they have the same values for a chosen set of attributes.
Forms equivalence classes (granules).
Basis for lower/upper approximations in Rough Sets.
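The indiscernibility idea can be sketched in a few lines of Python; the table and attribute names below are hypothetical, echoing the medical example later in these notes:

```python
from collections import defaultdict

def indiscernibility_classes(objects, attrs):
    """Group objects that share identical values on the chosen attributes."""
    classes = defaultdict(list)
    for name, row in objects.items():
        key = tuple(row[a] for a in attrs)   # attribute-value signature
        classes[key].append(name)
    return [sorted(v) for v in classes.values()]

# Toy table (hypothetical values): object -> {attribute: value}
table = {
    "o1": {"Headache": "yes", "Temp": "high"},
    "o2": {"Headache": "yes", "Temp": "high"},
    "o3": {"Headache": "no",  "Temp": "normal"},
}
# o1 and o2 agree on both attributes, so they fall into one granule
granules = indiscernibility_classes(table, ["Headache", "Temp"])
```

Each returned list is one equivalence class (granule); the granules partition the universe U.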
24(b) Reduct & Core
Reduct
Minimal subset of attributes giving the same classification power as the full set.
Removing any attribute → information loss.
Core
Intersection of all reducts
Contains the attributes that are indispensable for classification.
Example (Medical dataset)
Attributes: Headache, Muscle Pain, Temperature → Decision: Flu
If removing Muscle Pain still classifies correctly → {Headache, Temperature} is a reduct.
If Headache appears in all reducts → Headache is in core.
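The reduct/core definitions above can be brute-forced on a tiny table. The data below is hypothetical, modeled on the medical example; `consistent` and `reducts` are illustrative names, and brute force is only viable for small attribute sets:

```python
from itertools import combinations

def consistent(rows, attrs, decision):
    """True if objects that agree on `attrs` never disagree on the decision."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in attrs)
        if key in seen and seen[key] != row[decision]:
            return False
        seen[key] = row[decision]
    return True

def reducts(rows, conds, decision):
    """All minimal consistent attribute subsets (reducts), by exhaustive search."""
    found = []
    for r in range(1, len(conds) + 1):
        for subset in combinations(conds, r):
            # keep only subsets that classify consistently and are minimal
            if consistent(rows, subset, decision) and \
               not any(set(f) <= set(subset) for f in found):
                found.append(subset)
    return found

# Hypothetical medical table modeled on the example above
rows = [
    {"Headache": "yes", "MusclePain": "yes", "Temp": "high",   "Flu": "yes"},
    {"Headache": "yes", "MusclePain": "no",  "Temp": "high",   "Flu": "yes"},
    {"Headache": "yes", "MusclePain": "yes", "Temp": "normal", "Flu": "no"},
    {"Headache": "no",  "MusclePain": "yes", "Temp": "high",   "Flu": "no"},
]
rs = reducts(rows, ["Headache", "MusclePain", "Temp"], "Flu")
core = set.intersection(*[set(r) for r in rs])   # core = intersection of reducts
```

Here `{Headache, Temp}` turns out to be the only reduct, so both attributes are in the core, while MusclePain can be dropped without information loss.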
24(c) Pareto Optimal Solution
A solution is Pareto optimal if no other solution improves one objective without worsening another.
Output:
Set of best trade-offs, not one best point.
25(a) Dominated Set
Solution A dominates B if:
✔ A is no worse than B in all objectives
✔ A is strictly better in at least one
Dominated set = inferior solutions → usually discarded.
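A minimal Python check of this dominance rule (assuming all objectives are minimized; the function name is illustrative):

```python
def dominates(a, b):
    """Minimization: a dominates b if a is no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# (1, 2) beats (2, 3) in both objectives; (1, 3) and (2, 2) are incomparable
assert dominates((1, 2), (2, 3))
assert not dominates((1, 3), (2, 2))
assert not dominates((2, 2), (1, 3))
```

Solutions that are incomparable under this rule form the non-dominated set; only dominated solutions are discarded.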
25(b) ACO (Ant Colony Optimization)
Inspired by ants leaving pheromone trails.
Steps:
Construct solutions using pheromone + heuristic
Update pheromone (good paths reinforced)
Evaporation (prevents premature convergence to local optima)
Used for TSP, routing, path optimization.
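The three steps above (construct, reinforce, evaporate) can be sketched for a small TSP instance. All parameter values (`alpha`, `beta`, `rho`, ant and iteration counts) are illustrative defaults, not prescribed by the notes:

```python
import random

def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal ACO sketch for a symmetric TSP; dist is a full distance matrix."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone levels
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            # 1. construct a solution: pheromone^alpha * heuristic^beta
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cands = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cands]
                tour.append(rng.choices(cands, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # 2. evaporation on every edge
        tau = [[(1 - rho) * t for t in row] for row in tau]
        # 3. reinforcement: shorter tours deposit more pheromone
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len
```

On a 4-city square instance this quickly settles on the perimeter tour; real ACO variants (AS, MMAS, ACS) refine the deposit and selection rules.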
25(c) MOOP Example
Multiple conflicting objectives.
Example (Car Design):
Max speed
Min fuel
Min cost
Trade-offs → Pareto front.
26(a) Rough Set vs Classical Set
Classical
Crisp boundary: x ∈ A or x ∉ A.
Rough Set
Lower approx = definitely in
Upper approx = possibly in
Boundary region = uncertainty.
26(b) MOOP Mathematical Form
Min/Max: F(x) = [f₁(x), f₂(x), …, fₖ(x)]
Subject to:
gⱼ(x) ≤ 0
hₗ(x) = 0
Lᵢ ≤ xᵢ ≤ Uᵢ
k ≥ 2 objectives.
26(c) Indiscernibility Relation
IND(B): objects identical on attributes B.
Creates equivalence classes.
Basis for approximations and roughness.
27(a) Decision Table
Contains:
U: Objects
C: Condition attributes
D: Decision attributes
Used for classification & reduct computation.
27(b) MOOP vs Single-Objective
Single objective:
One optimum
Total ordering
MOOP:
Many conflicting objectives
Pareto set
Partial ordering
27(c) Advantages of MOGA
Finds entire Pareto set in one run
Handles non-convex fronts
No need for weights
Maintains diversity
(a) Basic Working Steps of MOGA
Initialize population
Evaluate objectives
Non-dominated sorting
Assign fitness (crowding distance)
Selection
Crossover + mutation
Elitism
Repeat till convergence
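The non-dominated sorting step above can be sketched as follows; this is a simple repeated-scan version (minimization assumed), not the fast sort used inside NSGA-II:

```python
def non_dominated_sort(points):
    """Group objective vectors into successive non-dominated fronts."""
    def dom(a, b):   # a dominates b (minimization)
        return all(x <= y for x, y in zip(a, b)) and a != b

    remaining = set(range(len(points)))
    fronts = []
    while remaining:
        # peel off everything not dominated by any other remaining point
        front = [i for i in remaining
                 if not any(dom(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

pts = [(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)]
fronts = non_dominated_sort(pts)   # front 0 is the current Pareto set
```

Fitness is then assigned by front rank, with crowding distance breaking ties inside a front to maintain diversity.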
(b) Non-Dominated Set vs Pareto Front
Non-Dominated Set:
solutions in decision space that are not dominated
Pareto Front:
their objective values in objective space
(c) Lower vs Upper Approximation
Lower:
granules fully inside the set (definitely in)
Upper:
granules that intersect the set (possibly in)
Boundary = Upper − Lower
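A short sketch of the two approximations over a partition into granules (the sets and values are illustrative):

```python
def approximations(granules, target):
    """granules: equivalence classes (sets); target: the concept set X."""
    lower, upper = set(), set()
    for g in granules:
        if g <= target:
            lower |= g           # fully inside: definitely in X
        if g & target:
            upper |= g           # intersects: possibly in X
    return lower, upper, upper - lower   # boundary = upper - lower

granules = [{1, 2}, {3, 4}, {5}]
X = {1, 2, 3}
lo, up, boundary = approximations(granules, X)
# {3, 4} straddles X, so 3 and 4 land in the boundary region
```

If the boundary is empty the set is crisp (exactly definable); otherwise it is rough.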
29(a) Decision Table → Decision Matrix
Matrix M(i, j) = set of attributes where objects i and j differ
Used to find discernibility and reducts
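A minimal sketch of this matrix (attribute names and rows are illustrative):

```python
def discernibility_matrix(rows, conds):
    """M[i][j] = set of attributes on which objects i and j take different values."""
    n = len(rows)
    return [[{a for a in conds if rows[i][a] != rows[j][a]}
             for j in range(n)]
            for i in range(n)]

rows = [
    {"Headache": "yes", "Temp": "high"},
    {"Headache": "yes", "Temp": "normal"},
    {"Headache": "no",  "Temp": "normal"},
]
M = discernibility_matrix(rows, ["Headache", "Temp"])
# diagonal entries are empty: an object never differs from itself
```

Reducts correspond to minimal attribute sets hitting every non-empty entry of M (the discernibility function).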
29(b) PSO (Particle Swarm Optimization)
Particle updates velocity using:
Inertia (current motion)
Personal best (pbest)
Global best (gbest)
Velocity update:
v = w·v + c₁·r₁·(pbest − x) + c₂·r₂·(gbest − x),
where r₁, r₂ are independent random numbers in [0, 1].
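The update rule can be wrapped into a minimal PSO sketch; the `w`, `c1`, `c2` values below are common textbook defaults, not taken from these notes:

```python
import random

def pso_minimize(f, dim, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    """Minimal PSO sketch minimizing f over the box [lo, hi]^dim."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_val = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pbest_val[i]:          # update personal best
                pbest[i], pbest_val[i] = list(xs[i]), val
                if val < gbest_val:         # update global best
                    gbest, gbest_val = list(xs[i]), val
    return gbest, gbest_val

# Sphere function: minimum 0 at the origin
best, val = pso_minimize(lambda x: sum(t * t for t in x), dim=2)
```

On the 2-D sphere function the swarm converges toward the origin within the given budget; production variants add velocity clamping and boundary handling.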
30(a) Crisp vs Fuzzy vs Rough
| Feature | Crisp | Fuzzy | Rough |
|---|---|---|---|
| Boundary | Sharp | Gradual | Granular |
| Membership | μ(x) ∈ {0,1} | μ(x) ∈ [0,1] | Approximations |
| Uncertainty | None | Vagueness | Indiscernibility |
30(b) Reduct & Core
Reduct = minimal attribute subset for same classification
Core = attributes in all reducts
30(c) Applications of MOOP
Engineering design
Portfolio optimization
Supply chain optimization
31(c) Rough Sets in Feature Selection
Compute reducts → smallest attribute sets maintaining classification
Core + reducts → remove irrelevant features
Helps in dimensionality reduction
