Rough Sets, Pareto Optimality, and Swarm Algorithms Explained

24(a) Meaning of Indiscernibility

  • Two objects are indiscernible if they have the same values for a chosen set of attributes.

  • Forms equivalence classes (granules).

  • Basis for lower/upper approximations in Rough Sets.
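
A minimal Python sketch of how indiscernibility partitions a universe into granules; the objects and attribute values below are made up for illustration:

```python
from collections import defaultdict

# Toy universe: each object maps attribute -> value (hypothetical data).
U = {
    "x1": {"Headache": "yes", "Temperature": "high"},
    "x2": {"Headache": "yes", "Temperature": "high"},
    "x3": {"Headache": "no",  "Temperature": "normal"},
}

def equivalence_classes(universe, attrs):
    """Group objects that are indiscernible on the chosen attributes."""
    classes = defaultdict(set)
    for obj, vals in universe.items():
        key = tuple(vals[a] for a in attrs)  # identical key => indiscernible
        classes[key].add(obj)
    return list(classes.values())

print(equivalence_classes(U, ["Headache", "Temperature"]))
# e.g. [{'x1', 'x2'}, {'x3'}] -- x1 and x2 form one granule
```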


24(b) Reduct & Core

Reduct

  • Minimal subset of attributes giving the same classification power as the full set.

  • Removing any attribute from a reduct → loss of classification power.

Core

  • Intersection of all reducts.

  • Contains the attributes that are indispensable for classification.

Example (Medical dataset)


  • Attributes: Headache, Muscle Pain, Temperature → Decision: Flu

  • If removing Muscle Pain still classifies correctly → {Headache, Temperature} is a reduct.

  • If Headache appears in all reducts → Headache is in core.
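
The reduct/core idea can be checked by brute force on a toy table. The rows below are hypothetical (chosen so the table is consistent); a subset is treated as a reduct when objects agreeing on it never disagree on the decision and no smaller subset has that property:

```python
from itertools import combinations

# Hypothetical medical decision table (condition attrs -> decision "Flu").
table = [
    {"Headache": "yes", "MusclePain": "no",  "Temperature": "high",   "Flu": "yes"},
    {"Headache": "yes", "MusclePain": "no",  "Temperature": "normal", "Flu": "no"},
    {"Headache": "no",  "MusclePain": "yes", "Temperature": "high",   "Flu": "no"},
    {"Headache": "no",  "MusclePain": "yes", "Temperature": "normal", "Flu": "no"},
]
conditions = ["Headache", "MusclePain", "Temperature"]

def consistent(attrs):
    """True if objects identical on attrs never disagree on the decision."""
    seen = {}
    for row in table:
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, row["Flu"]) != row["Flu"]:
            return False
    return True

# Reducts: minimal consistent subsets of the condition attributes.
reducts = []
for r in range(1, len(conditions) + 1):
    for subset in combinations(conditions, r):
        if consistent(subset) and not any(set(s) <= set(subset) for s in reducts):
            reducts.append(subset)

core = set(conditions).intersection(*map(set, reducts))
print("Reducts:", reducts)  # here: ('Headache', 'Temperature') and ('MusclePain', 'Temperature')
print("Core:", core)        # here: {'Temperature'}, the attribute shared by all reducts
```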


24(c) Pareto Optimal Solution

  • A solution is Pareto optimal if no other solution improves one objective without worsening another.

  • Output: a set of best trade-offs (the Pareto set), not one best point.



25(a) Dominated Set

  • Solution A dominates B if:
    ✔ A is no worse than B in all objectives
    ✔ A is strictly better in at least one

  • Dominated set = inferior solutions → usually discarded.
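
A minimal Python sketch of the dominance test and of discarding the dominated set (minimization assumed; the sample objective vectors are invented). What survives the filter is exactly the Pareto-optimal set from 24(c):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Keep only points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Objective vectors (cost, time), both to be minimized -- hypothetical values.
pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(non_dominated(pts))
# -> [(1, 5), (2, 3), (4, 1)]; (3, 4) and (5, 5) are dominated and discarded
```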


25(b) ACO (Ant Colony Optimization)

  • Inspired by ants leaving pheromone trails.

  • Steps:

    1. Construct solutions using pheromone + heuristic

    2. Update pheromone (good paths reinforced)

    3. Evaporation (prevents premature convergence to a local optimum)

  • Used for TSP, routing, path optimization.
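
An illustrative (not tuned) ACO loop for a four-city TSP; the distance matrix, parameter values, colony size, and iteration count are all arbitrary choices for this sketch:

```python
import random

# Symmetric toy distance matrix (hypothetical).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)
tau = [[1.0] * n for _ in range(n)]          # pheromone on each edge
alpha, beta, rho, Q = 1.0, 2.0, 0.5, 100.0   # pheromone/heuristic weights, evaporation, deposit

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def build_tour():
    """Step 1: construct a tour city by city using pheromone * heuristic."""
    tour = [random.randrange(n)]
    while len(tour) < n:
        i = tour[-1]
        choices = [j for j in range(n) if j not in tour]
        weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in choices]
        tour.append(random.choices(choices, weights=weights)[0])
    return tour

best = None
for _ in range(50):
    ants = [build_tour() for _ in range(10)]     # one tour per ant
    # Step 3: evaporation weakens all trails, discouraging premature convergence.
    for i in range(n):
        for j in range(n):
            tau[i][j] *= 1 - rho
    # Step 2: each ant deposits pheromone inversely proportional to tour length.
    for tour in ants:
        L = tour_length(tour)
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            tau[a][b] += Q / L
            tau[b][a] += Q / L
        if best is None or L < tour_length(best):
            best = tour

print(best, tour_length(best))  # best tour found (optimal length here is 18)
```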


25(c) MOOP Example

  • Multiple conflicting objectives.

  • Example (Car Design):

    • Max speed

    • Min fuel

    • Min cost

  • Trade-offs → Pareto front.
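
As a sketch, the trade-off can be expressed as a single objective vector per design; the formulas and coefficients below are invented purely for illustration:

```python
# Hypothetical car-design MOOP: one decision vector, three conflicting objectives.
def objectives(design):
    engine_size, weight = design
    speed = 120 + 40 * engine_size - 0.01 * weight   # maximize
    fuel  = 4 + 2.5 * engine_size                    # minimize
    cost  = 15000 + 8000 * engine_size + 5 * weight  # minimize
    return (-speed, fuel, cost)  # negate speed so everything is minimized

# Neither design dominates the other: faster means more fuel and cost,
# so both sit on the Pareto front.
for design in [(1.0, 1200.0), (2.0, 1500.0)]:
    print(design, objectives(design))
```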



26(a) Rough Set vs Classical Set

Classical

  • Crisp boundary: x ∈ A or x ∉ A.

Rough Set

  • Lower approx = definitely in

  • Upper approx = possibly in

  • Boundary region = uncertainty.


26(b) MOOP Mathematical Form

Min/Max: F(x) = [f₁(x), f₂(x), …, fₖ(x)], with k ≥ 2 objectives
Subject to:

  • gⱼ(x) ≤ 0 (inequality constraints)

  • hₗ(x) = 0 (equality constraints)

  • Lᵢ ≤ xᵢ ≤ Uᵢ (variable bounds)
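
A concrete toy instance of this general form, with hypothetical objective and constraint functions:

```python
# minimize F(x) = [f1(x), f2(x)] subject to g(x) <= 0 and bounds on x.
def F(x):
    f1 = x[0] ** 2 + x[1] ** 2          # e.g. material cost
    f2 = (x[0] - 2) ** 2 + x[1] ** 2    # e.g. deflection
    return (f1, f2)

def g(x):                               # inequality constraint g(x) <= 0
    return x[0] + x[1] - 3

def feasible(x, L=(0.0, 0.0), U=(2.0, 2.0)):
    """Check variable bounds L_i <= x_i <= U_i and the inequality constraint."""
    in_bounds = all(l <= xi <= u for xi, l, u in zip(x, L, U))
    return in_bounds and g(x) <= 0

print(F((1.0, 0.0)), feasible((1.0, 0.0)))  # (1.0, 1.0) True
```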


26(c) Indiscernibility Relation

  • IND(B) = {(x, y) ∈ U × U : a(x) = a(y) for all a ∈ B}, i.e. pairs of objects identical on every attribute in B.

  • Creates equivalence classes.

  • Basis for approximations and roughness.

27(a) Decision Table

Contains:

  • U: universe of objects

  • C: condition attributes

  • D: decision attributes

Used for classification & reduct computation.


27(b) MOOP vs Single-Objective

Single objective:

  • One optimum

  • Total ordering

MOOP:

  • Many conflicting objectives

  • Pareto set

  • Partial ordering


27(c) Advantages of MOGA

  • Finds entire Pareto set in one run

  • Handles non-convex fronts

  • No need for weights

  • Maintains diversity

28(a) Basic Working Steps of MOGA

  1. Initialize population

  2. Evaluate objectives

  3. Non-dominated sorting

  4. Assign fitness (crowding distance)

  5. Selection

  6. Crossover + mutation

  7. Elitism

  8. Repeat till convergence
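
Step 3 is the distinctive part, so here is a minimal (quadratic-per-front) non-dominated sorting sketch, assuming minimization and toy objective vectors:

```python
def dominates(a, b):  # minimization
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Split the population into fronts F1, F2, ...:
    F1 = non-dominated points; F2 = non-dominated after removing F1; etc."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

pop = [(1, 4), (2, 2), (3, 3), (4, 4)]  # hypothetical objective vectors
print(non_dominated_sort(pop))
# -> [[(1, 4), (2, 2)], [(3, 3)], [(4, 4)]]
```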



28(b) Non-Dominated Set vs Pareto Front

  • Non-Dominated Set: the solutions in decision space that no other solution dominates

  • Pareto Front: their objective values, plotted in objective space

28(c) Lower vs Upper Approximation

  • Lower: granules fully inside the set (definitely in)

  • Upper: granules that intersect the set (possibly in)

  • Boundary = Upper − Lower
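
A minimal sketch computing both approximations from a set of granules; the granules and target set are hypothetical:

```python
# Granules = equivalence classes under IND(B); X = set to approximate.
granules = [{"x1", "x2"}, {"x3"}, {"x4", "x5"}]
X = {"x1", "x2", "x4"}

lower, upper = set(), set()
for g in granules:
    if g <= X:
        lower |= g   # granule entirely contained in X -> definitely in
    if g & X:
        upper |= g   # granule overlaps X -> possibly in
boundary = upper - lower

print(lower)     # {'x1', 'x2'}: the granule fully inside X
print(upper)     # {'x1', 'x2', 'x4', 'x5'}: granules intersecting X
print(boundary)  # {'x4', 'x5'}: the uncertain region
```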

29(a) Decision Table → Decision Matrix

  • Matrix entry M(i, j) = set of condition attributes on which objects i and j differ (the discernibility matrix)

  • Used to find discernibility functions and compute reducts
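
A sketch of building M(i, j) from a toy decision table (rows are hypothetical; in reduct computation one usually keeps only entries for pairs with different decision values):

```python
rows = [
    {"Headache": "yes", "Temperature": "high",   "Flu": "yes"},
    {"Headache": "yes", "Temperature": "normal", "Flu": "no"},
    {"Headache": "no",  "Temperature": "high",   "Flu": "no"},
]
conditions = ["Headache", "Temperature"]

n = len(rows)
# M[i][j] = attributes on which objects i and j take different values.
M = [[{a for a in conditions if rows[i][a] != rows[j][a]}
      for j in range(n)] for i in range(n)]

print(M[0][1])  # {'Temperature'}: the only attribute separating objects 0 and 1
print(M[0][2])  # {'Headache'}
```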


29(b) PSO (Particle Swarm Optimization)

Particle updates velocity using:

  • Inertia (current motion)

  • Personal best (pbest)

  • Global best (gbest)

Velocity update:
v = w·v + c₁·r₁·(pbest − x) + c₂·r₂·(gbest − x), then x = x + v
(r₁, r₂ are independent random numbers in [0, 1])
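
A minimal PSO sketch minimizing f(x) = x² in one dimension; the parameter values (w, c₁, c₂, swarm size, iteration count) are arbitrary illustrative choices:

```python
import random

def f(x):
    return x * x  # toy objective to minimize

w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration weights
particles = [random.uniform(-10, 10) for _ in range(20)]
velocities = [0.0] * 20
pbest = particles[:]                          # each particle's personal best
gbest = min(particles, key=f)                 # swarm's global best

for _ in range(100):
    for i in range(20):
        r1, r2 = random.random(), random.random()
        # velocity = inertia + pull toward pbest + pull toward gbest
        velocities[i] = (w * velocities[i]
                         + c1 * r1 * (pbest[i] - particles[i])
                         + c2 * r2 * (gbest - particles[i]))
        particles[i] += velocities[i]         # position update: x = x + v
        if f(particles[i]) < f(pbest[i]):
            pbest[i] = particles[i]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i]

print(gbest)  # close to 0, the minimizer of x^2
```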

30(a) Crisp vs Fuzzy vs Rough

Feature      | Crisp         | Fuzzy         | Rough
Boundary     | Sharp         | Gradual       | Granular
Membership   | μ(x) ∈ {0,1}  | μ(x) ∈ [0,1]  | Approximations
Uncertainty  | None          | Vagueness     | Indiscernibility


30(b) Reduct & Core

  • Reduct = minimal attribute subset for same classification

  • Core = attributes in all reducts


30(c) Applications of MOOP

  • Engineering design

  • Portfolio optimization

  • Supply chain optimization


31(c) Rough Sets in Feature Selection

  • Compute reducts → smallest attribute sets maintaining classification

  • Core + reducts → remove irrelevant features

  • Helps in dimensionality reduction