Analytical Chemistry Fundamentals and Error Management
Accuracy and Precision in Measurement
Accuracy: It is the closeness of a measured value to the standard or true value. Accuracy can be judged from even a single measurement, since it compares that measurement against the true value. It is expressed as either absolute error or relative error.
Precision: It is the closeness of repeated measurements to one another. Assessing precision therefore requires several measurements of the same quantity. It is expressed as the standard deviation or the coefficient of variation, as shown in the sketch below.
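A minimal Python sketch of the two precision metrics, using hypothetical replicate titration readings:

```python
import statistics

# Hypothetical replicate burette readings (mL) for the same sample.
readings = [24.10, 24.15, 24.05, 24.12, 24.08]

mean = statistics.mean(readings)
sd = statistics.stdev(readings)        # sample standard deviation
cv_percent = (sd / mean) * 100         # coefficient of variation (%)

print(f"Mean: {mean:.3f} mL")
print(f"Standard deviation: {sd:.4f} mL")
print(f"Coefficient of variation: {cv_percent:.3f} %")
```

A small coefficient of variation indicates high precision, regardless of whether the mean itself is accurate.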
Qualitative and Quantitative Analysis
Qualitative Analysis
Qualitative analysis identifies which chemical substances are present, based on their physical and chemical properties. It includes:
- Color tests and flame tests
- Precipitation reactions
- Drug identification and adulterant detection
These methods are generally simple and do not require sophisticated instrumentation.
Quantitative Analysis
This determines the amount or concentration of a substance using numerical values and calculations. It includes:
- Titration and Gravimetry
- Spectrophotometry and HPLC
- Assay of dosage formulations and standardization
These methods often involve advanced analytical instruments.
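As one concrete illustration of how a quantitative method turns an instrument reading into a concentration, the sketch below applies the Beer-Lambert relation (A = εcl), the basis of spectrophotometric assays; all numerical values are hypothetical:

```python
# Beer-Lambert law: A = epsilon * c * l, so c = A / (epsilon * l).
# All values below are hypothetical, for illustration only.
absorbance = 0.450       # measured absorbance (dimensionless)
epsilon = 1.50e4         # molar absorptivity (L mol^-1 cm^-1), assumed
path_length_cm = 1.0     # cuvette path length (cm)

concentration = absorbance / (epsilon * path_length_cm)  # mol/L
print(f"Concentration: {concentration:.2e} mol/L")
```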
Concentration Units: Molarity and Normality
Molarity: It is defined as the number of moles of solute dissolved in 1 liter of solution. It is also known as molar concentration and is denoted by ‘M’.
Normality: It is defined as the number of gram equivalents of solute dissolved in 1 liter of solution. It is also known as normal concentration and is denoted by ‘N’.
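A minimal worked example relating the two definitions, using a hypothetical sulfuric acid solution:

```python
# Molarity and normality of a hypothetical H2SO4 solution.
mass_g = 4.9                 # grams of H2SO4 dissolved (assumed)
molar_mass = 98.08           # g/mol for H2SO4
volume_l = 0.5               # liters of solution
equivalents_per_mole = 2     # H2SO4 can donate 2 protons

moles = mass_g / molar_mass
molarity = moles / volume_l                        # mol/L

equivalent_weight = molar_mass / equivalents_per_mole
gram_equivalents = mass_g / equivalent_weight
normality = gram_equivalents / volume_l            # eq/L

print(f"Molarity:  {molarity:.3f} M")   # 0.100 M
print(f"Normality: {normality:.3f} N")  # 0.200 N = M * equivalents per mole
```

For an acid or base, normality equals molarity multiplied by the number of equivalents per mole, which is why the diprotic acid above is 0.1 M but 0.2 N.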
Acid-Base Theories and Indicators
According to the Brønsted-Lowry Theory: An acid is a substance capable of donating a proton, while a base is a substance capable of accepting a proton.
An Indicator: This is a substance that changes color near the end point of a titration and is used to detect it. Examples include:
- Phenolphthalein (colorless to pink, transition range ≈ pH 8.2–10.0)
- Methyl orange (red to yellow, transition range ≈ pH 3.1–4.4)
- Phenol red (yellow to red, transition range ≈ pH 6.8–8.4)
Acidimetry and Alkalimetry
1) Acidimetry
Acidimetry is used to determine the concentration of a basic substance using a standard acid solution. In acidimetry, a known volume of the base is placed in a conical flask; the solution is then titrated against a standard acid solution delivered from a burette until the equivalence point is reached.
The equivalence point is the point at which the amount of titrant (acid) added is stoichiometrically equal to the amount of analyte (base) present.
- Titrant: Acid (used in the burette)
- Analyte: Base (taken in the flask)
- In acidimetry, the acid is the standard solution.
2) Alkalimetry
Alkalimetry is used to determine the concentration of an acidic substance using a standard base. In alkalimetry, a known volume of the acid is placed in a conical flask and the base is taken in a burette. The equivalence point is reached when the amount of titrant (base) added is stoichiometrically equal to the amount of analyte (acid) present.
- Analyte: Acid (taken in the flask)
- Titrant: Base (taken in the burette)
- In alkalimetry, the base is the standard solution.
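Both acidimetry and alkalimetry rest on the same equivalence-point relation, N1V1 = N2V2 (equivalents of titrant = equivalents of analyte). A minimal sketch, with hypothetical volumes and normality:

```python
def analyte_normality(titrant_normality, titrant_volume_ml, analyte_volume_ml):
    """Concentration of the analyte from titration data (N1V1 = N2V2)."""
    return titrant_normality * titrant_volume_ml / analyte_volume_ml

# Acidimetry example: 0.1 N HCl in the burette titrates 25 mL of NaOH
# solution; the equivalence point is reached after 22.5 mL of acid.
base_normality = analyte_normality(0.1, 22.5, 25.0)
print(f"NaOH concentration: {base_normality:.3f} N")  # 0.090 N

# Alkalimetry is the mirror case (standard base in the burette, acid in
# the flask); the same relation applies with the roles swapped.
```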
Solvents Used in Non-Aqueous Titration
Four main types of solvents are used in non-aqueous titration:
- Protophilic Solvents: The word “protophilic” stands for “proton lover.” These solvents are basic in nature and are used to dissolve acidic analytes. They possess a high affinity for protons (e.g., pyridine, amines).
- Protogenic Solvents: The word “protogenic” stands for “proton generator.” These solvents are acidic in nature and can donate protons; they are used to dissolve basic analytes (e.g., glacial acetic acid, formic acid).
- Amphoteric Solvents: These can act as both protophilic and protogenic solvents; they behave as both acids and bases, accepting or donating a proton as required (e.g., alcohols such as methanol and ethanol).
- Aprotic Solvents: These solvents are chemically inert. They are neither acidic nor basic and do not accept or donate protons. They have low dielectric constants (e.g., benzene, chloroform).
Understanding Errors in Chemical Analysis
Error is defined as the difference between the standard value and the observed value.
ERROR = STANDARD VALUE – OBSERVED VALUE
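Applying this definition, together with the absolute and relative error mentioned under accuracy, to hypothetical assay values:

```python
# Hypothetical assay: the standard (true) value and an observed value.
standard_value = 100.0   # e.g., labeled content in mg
observed_value = 98.5    # measured content in mg

absolute_error = standard_value - observed_value           # as defined above
relative_error_pct = (absolute_error / standard_value) * 100

print(f"Absolute error: {absolute_error:.2f} mg")     # 1.50 mg
print(f"Relative error: {relative_error_pct:.2f} %")  # 1.50 %
```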
Types of Errors
- Systematic (Determinate) Error: These arise during analysis from identifiable causes such as faulty procedures or instruments, and they bias results in a consistent direction. Subtypes include:
  - Personal Error: Occurs due to personal mistakes, carelessness, or a lack of knowledge on the part of the analyst.
  - Instrumental Error: Occurs due to a defective or uncalibrated instrument.
  - Methodic Error: Occurs when the analyst chooses the wrong method.
  - Reagent Error: Occurs due to impurities in the reagents.
- Random (Indeterminate) Error: These occur unpredictably and are difficult to identify. The analyst has no control over them, so complete elimination may not be possible (e.g., errors due to fluctuations in temperature and humidity).
- Gross Error: Significant mistakes caused by human carelessness or poor experimental techniques, which can produce outlier results that differ dramatically from other data.
Methods for Minimizing Analytical Errors
Errors can be minimized by the following methods:
- Calibration of Instruments/Apparatus: Calibration checks the correctness of an instrument by comparing its readings against known standards. It minimizes determinate errors arising from instruments or apparatus (e.g., volumetric glassware).
- Blank Determination: The analysis is performed with and without the sample under identical conditions; subtracting the blank result identifies and removes the contribution of impurities in reagents and solvents (see the sketch after this list).
- Control Determination: A standard substance of known composition is analyzed under the same conditions as the sample, and the two results are compared.
- Independent Method: The substance is analyzed by two or more different methods, and the results are compared to detect and minimize errors.
- Parallel Determination: The substance is analyzed several times by the same method (e.g., in triplicate), and the results are compared to detect and minimize random errors.
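A minimal sketch of blank and parallel determination, using hypothetical absorbance readings:

```python
import statistics

# Blank determination: run the procedure with and without the sample,
# then subtract the blank to remove the contribution of reagent impurities.
blank_reading = 0.020      # hypothetical reading, no sample
sample_reading = 0.475     # hypothetical reading, with sample
corrected_reading = sample_reading - blank_reading
print(f"Blank-corrected reading: {corrected_reading:.3f}")

# Parallel determination: repeat the same analysis several times and
# compare; the spread of the replicates flags random error.
replicates = [0.452, 0.458, 0.455]
print(f"Mean: {statistics.mean(replicates):.3f}, "
      f"SD: {statistics.stdev(replicates):.4f}")
```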
