8085 Microprocessor Architecture & Assembly Language Fundamentals

Bus Organization in 8085 Microprocessors

A bus in the 8085 microprocessor is a group of wires used for communication between different components. There are three main types:

  • Data Bus: Carries the actual data (8 bits wide on the 8085), like a delivery van.
  • Address Bus: Carries the memory address of the data to access (16 bits wide), like a GPS.
  • Control Bus: Carries control signals (e.g., read/write). These signals coordinate data movement between the CPU, memory, and I/O devices.
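As a rough mental model (not a cycle-accurate simulation), the sketch below mimics one memory-read transaction: a 16-bit address goes out on the address bus, a read control signal is asserted, and one byte comes back on the data bus. The names and the tiny fake memory are illustrative, not from the original notes.

```python
# Illustrative model of an 8085-style read cycle (not cycle-accurate).
# The 8085 has a 16-bit address bus (64 KB of addresses) and an 8-bit data bus.

MEMORY = {0x2000: 0x3E, 0x2001: 0x42}  # tiny fake memory: address -> byte

def bus_read(address: int) -> int:
    """Drive the address bus, assert RD (read), return the byte on the data bus."""
    assert 0 <= address <= 0xFFFF, "address bus is 16 bits wide"
    control = "RD"                     # control bus says: this is a read
    data = MEMORY.get(address, 0x00)   # memory places the byte on the data bus
    assert 0 <= data <= 0xFF, "data bus is 8 bits wide"
    print(f"{control} @ {address:#06x} -> {data:#04x}")
    return data

bus_read(0x2000)   # prints: RD @ 0x2000 -> 0x3e
```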

Memory Addressing & Mapping Fundamentals

Memory addressing refers to how the processor identifies a unique memory location to read or write. The 8085's 16-bit address bus means it can address 2^16 = 65,536 locations (64 KB).
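A quick arithmetic sketch of what a 16-bit address bus buys you; the high/low byte split is just the usual way of viewing a 16-bit address, not a mapping scheme from the original notes.

```python
ADDRESS_BITS = 16
locations = 2 ** ADDRESS_BITS                 # 65,536 addressable locations
print(locations, "=", locations // 1024, "KB")  # 65536 = 64 KB

addr = 0x20A4
high, low = addr >> 8, addr & 0xFF            # split into high and low bytes
print(f"high={high:#04x} low={low:#04x}")     # high=0x20 low=0xa4
```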


NLP Foundations: From Text Processing to Large Language Models

Week 1: Working with Words

  • Tokenization:

    Splitting text into discrete units (tokens), typically words or punctuation. Techniques vary (a simple split on whitespace vs. advanced tokenizers); challenges include handling punctuation, contractions, multi-word names, and different languages (e.g., Chinese has no spaces). Good tokenization is foundational for all NLP tasks; see the sketch after this list.
  • Bag-of-Words (BoW):

    Representing a document by the counts of each word in a predefined vocabulary, ignoring order. The vocabulary is fixed in advance (typically the set of unique tokens in the training corpus), so every document maps to a count vector of the same length.
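A minimal sketch of both ideas together, using a naive regex tokenizer (real tokenizers handle contractions, multi-word names, and languages without spaces far more carefully) and a hand-rolled vocabulary; all names here are illustrative.

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    # Naive: lowercase, then match runs of word characters or single punctuation marks.
    return re.findall(r"\w+|[^\w\s]", text.lower())

docs = ["The cat sat.", "The cat saw the dog!"]
tokenized = [tokenize(d) for d in docs]

# Bag-of-words: fixed vocabulary, then per-document counts (word order is discarded).
vocab = sorted({tok for toks in tokenized for tok in toks})

def bow(tokens: list[str]) -> list[int]:
    counts = Counter(tokens)
    return [counts[w] for w in vocab]

print(vocab)              # ['!', '.', 'cat', 'dog', 'sat', 'saw', 'the']
print(bow(tokenized[0]))  # [0, 1, 1, 0, 1, 0, 1]
print(bow(tokenized[1]))  # [1, 0, 1, 1, 0, 1, 2]
```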

Cybersecurity Essentials: Concepts & Best Practices

Vulnerability & Patch Management

(Domain 4 Concepts)

Common Vulnerability Scoring System (CVSS)

  • Rates vulnerabilities on a scale of 0-10 for severity.

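As a hedged illustration, the mapping below uses the CVSS v3.x qualitative severity bands (None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0); the function name is ours, not from any particular library.

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(9.8))   # Critical
```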
Exploit vs. Zero-Day

  • Exploit: A known attack method (code or technique) that takes advantage of a vulnerability.
  • Zero-Day: A vulnerability exploited before the vendor knows about it, so no patch is available yet.

Remediation vs. Mitigation

  • Remediation: Completely fixing a vulnerability.
  • Mitigation: Applying temporary protections while awaiting a full fix.

Common Scanning Tools

  • Nessus / OpenVAS: Identify known vulnerabilities.
  • Nikto: Scans web servers for dangerous files, outdated server software, and common misconfigurations.

Quantitative Methods & Machine Learning Essentials

Likelihood Function

The likelihood function gives the probability (or density) of the observed data x, viewed as a function of the model parameters θ. It is denoted p(x|θ).

Maximum Likelihood Estimation (MLE)

The Maximum Likelihood Estimator (MLE), δ(x), is defined as:

δ(x) = argmax_θ p(x|θ)

or equivalently:

δ(x) = argmax_θ log p(x|θ)

For Gaussian data, the log-likelihood being maximized is log p(x|μ, σ²) = -(n/2) log(2πσ²) - (1/(2σ²)) Σᵢ (xᵢ - μ)².
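A small numeric sketch (data and names are illustrative): for i.i.d. Gaussian observations, the MLE of μ is the sample mean and the MLE of σ² is the unadjusted sample variance, and the log-likelihood is indeed maximized there.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=2000)   # simulated data: true mu=2, sigma=1.5

mu_hat = x.mean()         # MLE of the mean
var_hat = x.var(ddof=0)   # MLE of the variance (divides by n, not n-1)

def gaussian_log_lik(mu: float, var: float) -> float:
    n = len(x)
    return -0.5 * n * np.log(2 * np.pi * var) - ((x - mu) ** 2).sum() / (2 * var)

print(mu_hat, var_hat)
# The log-likelihood at the MLE beats nearby parameter values:
print(gaussian_log_lik(mu_hat, var_hat) >= gaussian_log_lik(mu_hat + 0.1, var_hat))  # True
```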

Asymptotic Distribution of MLE

When n is large, the asymptotic distribution of the MLE is given by:

θ̂ ≈ N(θ, 1/(n·I(θ)))

where I(θ) is the Fisher information of a single observation:

I(θ) = -E[∂² log p(x|θ) / ∂θ²]

Examples include:

  • Gaussian (unknown mean μ, known σ²): I(μ) = 1/σ²
  • Exponential (rate λ): I(λ) = 1/λ²
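A simulation sketch of the exponential case (sample sizes and seeds are arbitrary): the MLE of the rate is λ̂ = 1/x̄, and with I(λ) = 1/λ² the asymptotic standard deviation is λ/√n. The empirical spread of λ̂ over repeated samples should come close to that value.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 2.0, 500, 4000

# MLE of an exponential rate: lambda_hat = 1 / sample_mean.
estimates = np.array([1.0 / rng.exponential(scale=1/lam, size=n).mean()
                      for _ in range(reps)])

print(estimates.std())    # empirical sd of the MLE across simulations
print(lam / np.sqrt(n))   # asymptotic sd = lam / sqrt(n) ~ 0.0894
```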

Bayesian Estimation

The Bayesian forecast (posterior predictive distribution) is given by:

p(x̃ | x) = ∫ p(x̃ | θ) p(θ | x) dθ

where the posterior p(θ|x) ∝ p(x|θ) p(θ) combines the likelihood with the prior.
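A closed-form sketch using a conjugate pair (our choice, purely for illustration): Bernoulli data with a Beta(α, β) prior, where the forecast P(x̃ = 1 | x) works out to (α + #successes) / (α + β + n).

```python
alpha, beta = 1.0, 1.0            # uniform Beta prior
data = [1, 0, 1, 1, 0, 1, 1, 1]   # observed Bernoulli outcomes

successes = sum(data)             # 6
n = len(data)                     # 8

# Posterior is Beta(alpha + successes, beta + n - successes);
# integrating the Bernoulli likelihood against it gives the forecast below.
forecast = (alpha + successes) / (alpha + beta + n)
print(forecast)   # P(next observation = 1 | data) = 7/10 = 0.7
```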


Probabilistic Reasoning & AI Decision Making

Probabilistic Inference

  1. What Is Probability?

    • A measure P assigning each event A a number in [0,1].

    • Axioms of Probability

      1. P(∅) = 0, P(Ω) = 1

      2. For disjoint A, B, P(A∪B) = P(A) + P(B)

    • Conditional Probability: P(A|B) = P(A∧B) / P(B), defined when P(B) > 0

  2. Bayesian Inference Fundamentals

    • Bayes’ Rule (a worked numeric example follows this list)

      P(H|E) = [P(E|H) * P(H)] / P(E)

    • Key Terms in Bayes’ Rule

      • Prior P(H): initial belief

      • Likelihood P(E|H)

      • Posterior P(H|E): updated belief

  3. Bayesian Networks (BNs)

    • Structure: Directed acyclic graph (DAG); nodes = variables; edges = direct dependencies (see the enumeration sketch after this list).

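The classic diagnostic-test numbers below are invented for illustration: prior P(H) = 1%, P(E|H) = 0.95, P(E|¬H) = 0.05. Plugging them into Bayes' rule shows why a positive test for a rare condition still leaves the posterior low.

```python
p_h = 0.01          # prior P(H): e.g., has the condition
p_e_h = 0.95        # likelihood P(E|H): test positive given condition
p_e_not_h = 0.05    # false-positive rate P(E|not H)

# Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|not H)P(not H)
p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)

posterior = p_e_h * p_h / p_e     # Bayes' rule: P(H|E)
print(round(posterior, 3))        # ~0.161: a positive test yields only ~16% posterior
```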
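A minimal inference-by-enumeration sketch over a two-node network Rain → WetGrass (all probabilities invented for illustration): the joint factorizes along the DAG as P(R)·P(W|R), and conditioning on evidence is just summing the consistent joint entries.

```python
# Two-node Bayesian network: Rain -> WetGrass.
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True: 0.9, False: 0.2}   # P(WetGrass=True | Rain)

def joint(rain: bool, wet: bool) -> float:
    # DAG factorization: P(R, W) = P(R) * P(W | R)
    p_w = P_wet_given_rain[rain]
    return P_rain[rain] * (p_w if wet else 1 - p_w)

# P(Rain=True | Wet=True) by enumeration over joint entries consistent with evidence.
numerator = joint(True, True)
evidence = joint(True, True) + joint(False, True)
print(round(numerator / evidence, 3))   # 0.18 / 0.34 ~ 0.529
```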

Essential Machine Learning Concepts Explained

Regularization Techniques in Machine Learning

Regularization is a set of techniques used to reduce overfitting by encouraging simpler models. It works by adding a penalty term to the loss function that discourages overly large or complex weights. The two most common forms are L2 (Ridge) and L1 (Lasso) regularization.

L2 Regularization (Ridge / Weight Decay)

L2 Regularization adds the squared L2 norm of the weights to the loss: L(w) = original loss + λ‖w‖². This shrinks weights toward zero in proportion to their size (hence "weight decay"), but rarely drives any weight exactly to zero.
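A compact numpy sketch of the L2 penalty in action, via closed-form ridge regression w = (XᵀX + λI)⁻¹Xᵀy (the intercept is omitted and the data and λ values are arbitrary, purely for illustration): the weight norm shrinks as λ grows.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
true_w = np.array([3.0, -2.0, 0.5, 0.0, 1.0])
y = X @ true_w + rng.normal(scale=0.5, size=100)

def ridge(lam: float) -> np.ndarray:
    # Minimizer of ||y - Xw||^2 + lam * ||w||^2 (closed form).
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

for lam in (0.0, 10.0, 1000.0):
    w = ridge(lam)
    print(lam, np.round(np.linalg.norm(w), 3))   # norm of w shrinks as lam grows
```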
