Key Molecular Biology Techniques: BACs, Autoradiography, Northern & Western Blotting
Bacterial Artificial Chromosomes (BACs)
A bacterial artificial chromosome (BAC) is an engineered DNA molecule used to clone DNA sequences in bacterial cells (e.g., E. coli), most often in connection with DNA sequencing. Segments of an organism’s DNA ranging from roughly 100,000 to 300,000 base pairs (bp) can be inserted into a BAC, which is then taken up by bacterial cells and propagated as a circular artificial chromosome. As the bacterial cells grow and divide, they amplify the BAC DNA, which can then be isolated and used for sequencing.
Because a BAC is much smaller than the endogenous bacterial chromosome, it is straightforward to purify the BAC DNA away from the rest of the bacterial cell’s DNA, yielding the cloned DNA in purified form. This and other powerful features of BACs have made them extremely useful for mapping and sequencing mammalian genomes.
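To give a sense of the scale involved, the short Python sketch below estimates how many BAC clones a library needs so that any given locus is likely to be represented, using the standard Clarke–Carbon coverage formula. The genome size, insert size, and coverage probability used here are illustrative assumptions, not values taken from the text.

```python
import math

def bac_clones_needed(genome_bp: int, insert_bp: int, p_coverage: float) -> int:
    """Clarke-Carbon estimate: N = ln(1 - P) / ln(1 - f), where f is the
    fraction of the genome carried by a single clone (insert_bp / genome_bp)."""
    f = insert_bp / genome_bp
    return math.ceil(math.log(1.0 - p_coverage) / math.log(1.0 - f))

# Illustrative numbers: a 3.2 Gb mammalian genome, 200 kb BAC inserts, and a
# 99% chance that any given locus appears in at least one clone.
print(bac_clones_needed(3_200_000_000, 200_000, 0.99))  # roughly 74,000 clones
```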
Autoradiography Explained
Autoradiography creates an image of a radioactive source through the direct exposure of imaging media (Parsons-Davis, 2018); the sample and the medium must be in close proximity to one another. Traditional autoradiography used film emulsions, with silver halide film being the most common. The resolution of this imaging system depends strongly on the thickness of the emulsion, the size of the silver halide grains dispersed in it, the performance of the optical densitometer, and the development process, but it is typically around 10 µm. Phosphor imaging, with its high sensitivity, accuracy, efficiency, and convenience, is now commonly used for autoradiography. In phosphor imaging systems, barium fluorohalide crystals doped with a europium activator serve as the photosensitive grains, and the image is formed via photostimulated luminescence. Incident radiation produces the latent image, which is subsequently “read out” through stimulation by laser light and detected by a photomultiplier tube. As with film, resolution is a function of emulsion thickness, grain size, the optical readout system, and the properties of the sample being imaged. Because of the thinness of the medium, autoradiography is not sensitive to gamma radiation. Depending on the use of absorbers, it can be sensitive to alpha particles, beta particles, or both, with a typical resolution of around 50 µm.
General Principle of Autoradiography
The principle of autoradiographic imaging is the precipitation of silver (Ag) atoms resulting from the ionization of a silver halide (AgX – silver bromide, chloride, iodide, or fluoride – AgBr, AgCl, AgI, or AgF, respectively) by radiolabeled samples. AgX is a light-sensitive compound commonly used in photography, generally suspended as fine crystals (grains) in a gelatin photographic emulsion. Each AgX grain is individually encapsulated in the gelatin and functions as an independent detector of radioactive decay from the radiolabeled sample. When radioactive particles hit the gelatin emulsion, AgX is reduced, producing insoluble silver crystals.
Gelatin photographic emulsions are used to coat photographic and X-ray films, which are made of a flexible base (usually cellulose acetate). When a radiolabeled sample is placed in contact with a coated X-ray film (exposure), it generates a latent (hidden) image corresponding to the radioactivity distribution within the sample. To make the image visible, the exposed photographic/X-ray film is submerged in a developing reagent, a chemical mixture containing reducing agents that convert the exposed silver halide crystals into metallic silver, darkening the gelatin emulsion. The reaction is then stopped by a fixative reagent, which removes the excess AgX from the film. Highly radioactive areas (e.g., areas with a higher concentration of a radiolabeled drug or higher metabolic activity) reduce more AgX, resulting in higher optical density in the film (darker areas) (Figures 1 and 2) (Cagampang, Piggins, Sheward, Harmar, & Coen, 1998; Klein et al., 2016; Srinivasan, Krebs, & RajBhandary, 2006). Because the technique reports spatial differences in labeling, it is of little value for samples that are homogeneously labeled. Although it can be quantitative, autoradiography can be a slow process, depending on the half-life of the radioisotopes used.
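Because the darkened film is what is ultimately measured, a densitometry step converts scanned pixel intensity into optical density. The minimal Python sketch below shows this conversion under the simplifying assumption that pixel values are proportional to transmitted light; the 8-bit scaling and the example array are illustrative, not part of any standard protocol.

```python
import numpy as np

def optical_density(scan: np.ndarray, i_zero: float = 255.0) -> np.ndarray:
    """Convert a grayscale film scan to optical density, OD = log10(I0 / I),
    assuming pixel values are proportional to transmitted light intensity."""
    transmitted = np.clip(scan.astype(float), 1.0, i_zero)  # avoid log(0)
    return np.log10(i_zero / transmitted)

# Darker (more heavily exposed) regions have lower pixel values and
# therefore higher optical density.
example = np.array([[250, 200], [50, 10]], dtype=np.uint8)
print(optical_density(example))
```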
Northern Blotting Technique
The Northern blot is a technique based on the principle of blotting for the analysis of specific RNAs in a complex mixture. It is a modified version of Southern blotting, which was developed for the analysis of DNA sequences. The detection of specific nucleic acid sequences extracted from different types of biological samples is essential in molecular biology, making blotting techniques indispensable in the field. The principle is essentially the same as that of Southern blotting, except that the target molecules are RNA rather than DNA, and the probes are chosen accordingly. The technique provides information about the length of the RNA transcripts and the presence of variations in the sequence. Although it primarily focuses on identifying RNA sequences, it has also been used for their quantification. Since its introduction, several modifications have been made for the analysis of mRNAs, pre-mRNAs, and short RNAs. Northern blotting served as the primary technique for RNA analysis for a long time; however, newer, more convenient, and cost-effective techniques such as RT-PCR have gradually replaced it.
Principle of Northern Blot
The principle of the Northern blot is the same as that of all other blotting techniques: biomolecules separated in a gel are transferred onto a membrane for detection. The key steps are:
- Separation: RNA samples are separated on gels according to their size by gel electrophoresis. Since RNAs are single-stranded, they can form secondary structures through intramolecular base pairing; therefore, electrophoretic separation is performed under denaturing conditions.
- Transfer: The separated RNA fragments are then transferred to a nylon membrane; nitrocellulose membranes are not used because RNA does not bind effectively to them.
- Immobilization: The transferred segments are fixed (immobilized) on the membrane, typically by UV crosslinking or baking.
- Detection: The RNA fragments on the membrane are detected by adding a labeled probe complementary to the RNA sequences present on the membrane.
Hybridization forms the basis of detection: the specificity of base pairing between the probe and its target RNA allows accurate identification of the segments. Because Northern blotting relies on size-dependent separation of the RNA, it can also be used to determine the sizes of the transcripts.
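Because detection depends on base pairing, the probe must be complementary (in antiparallel orientation) to the target RNA. The minimal Python sketch below illustrates deriving a DNA probe sequence from an RNA target; the example sequence is purely hypothetical.

```python
# Map each RNA base to its complementary DNA base.
COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def dna_probe_for_rna(target_rna: str) -> str:
    """Return the DNA probe (written 5'->3') that will hybridize to an RNA
    target given 5'->3', i.e., the reverse complement of the target."""
    return "".join(COMPLEMENT[base] for base in reversed(target_rna.upper()))

# Hypothetical target sequence, not taken from the text.
print(dna_probe_for_rna("AUGGCUUCAGGAAC"))  # GTTCCTGAAGCCAT
```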
Western Blotting (Immunoblotting)
Western blotting (immunoblotting) is a powerful and commonly used technique capable of detecting or semiquantifying an individual protein in complex mixtures of proteins extracted from cells or tissues. The history surrounding the origin of Western blotting, the theory behind the technique, a comprehensive protocol, and its uses are presented here. Lesser-known but significant problems in the Western blotting field, along with troubleshooting of common issues, are highlighted and discussed. This work serves as a comprehensive primer for new Western blotting researchers and for those seeking a better understanding of the technique or better results.
Western blotting, also known as immunoblotting, is one of the most commonly used techniques in molecular biology and proteomics in scientific laboratories worldwide today [Citation1]. It is an analytical method used to detect and semiquantify target proteins [Citation2, Citation3]. Western blotting also allows the identification of specific amino acids carrying post-translational modifications (PTMs), which arise in the cell in response to physiological changes in both healthy and disease states [Citation4]. Such PTMs include:
- Phosphorylation
- Ubiquitination
- Biotinylation
- Glycosylation
- Methylation
- Acetylation
- Sumoylation
- Nitration
- Oxidation/Reduction
- Nitrosylation
- Other types [Citation5]
Although some researchers have criticized Western blotting and suggested it may not be as reliable as previously assumed, it remains an efficient and powerful technique that can be used to identify proteins and accurately quantify relative protein levels [Citation6]. One of the main arguments that Western blotting is unreliable concerns poorly characterized antibodies [Citation7]. Researchers have bought commercial protein-detection kits for a specific protein that were later found to target a different protein [Citation8]. However, using validated antibodies, following appropriate experimental procedures, and determining the linear and quantitative dynamic range for each target protein under the given experimental conditions make it possible to achieve a successful, reproducible, and semiquantitative or quantitative application of Western blotting [Citation9]. The theory of the Western blot, a link to a detailed protocol, the most common Western blotting problems and their solutions, troubleshooting tips for common issues, and future perspectives are discussed below.
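In practice, semiquantification usually means measuring band intensities by densitometry, normalizing each target band to a loading control from the same lane, and confirming that the signals fall within the linear range established for that antibody. The Python sketch below illustrates only the normalization step; the band values and lane names are hypothetical.

```python
# Hypothetical densitometry readings (arbitrary units), for illustration only.
target_bands = {"control": 1200.0, "treated": 3100.0}   # target-protein signal per lane
loading_bands = {"control": 980.0, "treated": 1010.0}   # loading-control signal per lane

def normalized_fold_change(target: dict, loading: dict, reference: str = "control") -> dict:
    """Normalize each target band to its loading control, then express the
    result as fold change relative to the reference lane."""
    norm = {lane: target[lane] / loading[lane] for lane in target}
    return {lane: value / norm[reference] for lane, value in norm.items()}

print(normalized_fold_change(target_bands, loading_bands))
# e.g., {'control': 1.0, 'treated': ~2.5}
```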
History of Western Blot Application
The Western blot technique was first introduced in 1979, more than four decades ago. Since then, it has been mentioned in the abstracts, keywords, and titles of more than 400,000 PubMed-listed publications [Citation3]. This highlights how the technique has stood the test of time and remains in everyday use in bioscience laboratories today. It is important to understand the history behind the Western blot technique to appreciate why it was developed.
In 1807, Ferdinand Frederic Reuss, a physicist at Moscow State University, observed that clay particles suspended in water moved toward the positive electrode when an electric current was passed through the suspension in a glass tube [Citation10]. In 1955, Oliver Smithies separated human tissue extracts using starch gel electrophoresis, which allowed genomic diversity to be estimated at the protein level [Citation11]. What is now referred to as Western blotting has multiple origins, because several research groups were working on similar techniques at around the same time [Citation3].
At least three scientific publications are central to the origin of Western blotting. On July 1, 1979, George Stark’s laboratory published a method in which proteins were transferred to membranes passively, by diffusion and capillary action rather than by an electric field [Citation12]. This publication used a polyacrylamide/agarose gel mix as the separation matrix, diazobenzyloxymethyl paper as the membrane, and iodine-125 (125I)-labeled protein A for detection [Citation12]. Stark’s laboratory was already at the forefront of blotting techniques: in 1977 it had developed an RNA blotting technique known as “Northern blotting” [Citation13]. In 1979, another work, by Towbin et al., employed electrophoretic forces to transfer the proteins instead of passive transfer. Towbin et al. used SDS-PAGE as the separation gel matrix, nitrocellulose as the membrane, and primary and secondary antibodies for protein detection.