Web Scraping with Python: Techniques & Libraries

Introduction

This document explores various techniques and libraries for web scraping using Python. We’ll cover popular tools like Selenium, Beautiful Soup, and Requests, briefly review NumPy broadcasting, and touch upon data visualization libraries like Matplotlib and Seaborn.

Selenium for Dynamic Websites

Selenium is a powerful library for automating web browser interactions. It’s particularly useful for scraping dynamic websites that rely on JavaScript.

from selenium import webdriver
from selenium.webdriver.common.by import By
import time

# Launch Chrome and open the PNR status page
driver = webdriver.Chrome()
driver.maximize_window()
driver.get('https://www.confirmtkt.com/pnr-status')

# Locate the PNR input and the submit button, then submit a query
pnr_field = driver.find_element(By.NAME, "pnr")
submit_button = driver.find_element(By.CSS_SELECTOR, '.col-xs-4')
pnr_field.send_keys('4358851774')
submit_button.click()
time.sleep(3)  # crude wait for the JavaScript-rendered result to load

# The result is a Selenium WebElement; extract its raw HTML
result_card = driver.find_element(By.CSS_SELECTOR, ".pnr-card")
print(type(result_card))
html_content = result_card.get_attribute('outerHTML')
print("HTML Content:", html_content)
driver.quit()

Beautiful Soup for Parsing HTML

Beautiful Soup is a library for parsing HTML and XML documents. It provides convenient methods for extracting data from web pages.

from bs4 import BeautifulSoup

# Parse a local HTML file ("web.html") rather than fetching over HTTP
with open("web.html") as f:
    soup = BeautifulSoup(f, 'html.parser')

heading = soup.select('h1')                  # select by tag name
print("1. Heading:", heading[0].text)
paragraph = soup.select('.paragraph')        # select by class
print("2. Paragraph:", paragraph[0].text)
div_content = soup.select('#content')        # select by id
print("3. Div Content:", div_content[0].text)
link = soup.select('a[href="https://example.com"]')  # select by attribute
print("4. Link:", link[0]['href'])
list_items = soup.select('ul li')            # nested selector
print("5. List Items:")
for item in list_items:
    print("-", item.text)

NumPy Broadcasting

NumPy’s broadcasting feature allows efficient element-wise operations on arrays of different shapes, without explicitly replicating data in memory.

Key Concepts:

  • Shape Compatibility
  • Rules of Broadcasting
  • Automatic Replication
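
The rules above can be sketched with a small example (the array names are ours, invented for illustration): two shapes are compatible when, dimension by dimension, the sizes are equal or one of them is 1, and the size-1 axis is conceptually replicated.

```python
import numpy as np

# A (3, 1) column and a (1, 4) row: compatible because each
# dimension pair is either equal or 1.
col = np.arange(3).reshape(3, 1)   # [[0], [1], [2]]
row = np.arange(4).reshape(1, 4)   # [[0, 1, 2, 3]]

# Broadcasting "replicates" col across columns and row across rows,
# producing a (3, 4) result without actually copying data.
grid = col + row
print(grid.shape)  # (3, 4)
print(grid)
```

No loop and no explicit tiling is needed; NumPy performs the element-wise addition as if both arrays had been stretched to shape (3, 4).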

Requests for HTTP Requests

The Requests library simplifies making HTTP requests in Python.

import requests
url = "https://example.com"
response = requests.get(url)
if response.status_code == 200:
    html_content = response.text
else:
    print(f"Failed to fetch the webpage. Status code: {response.status_code}")

Extracting Links with BeautifulSoup

from urllib.request import urlopen
from bs4 import BeautifulSoup

url = input('Enter the URL: ')
html = urlopen(url).read()
soup = BeautifulSoup(html, "html.parser")

# soup('a') is shorthand for soup.find_all('a')
tags = soup('a')
for tag in tags:
    print('TAG:', tag)
    print('URL:', tag.get('href', None))
    # Some anchors are empty; guard before indexing into contents
    print('Contents:', tag.contents[0] if tag.contents else '')
    print('Attrs:', tag.attrs)

Data Visualization with Matplotlib

Matplotlib is a versatile library for creating static, animated, and interactive visualizations in Python.

Benefits:

  • Simple and intuitive plotting
  • High performance and professional output
  • Seamless integration with NumPy and SciPy
  • Extensive customization options
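
A minimal sketch of the plotting workflow described above (the data and output filename are invented for illustration; the Agg backend is selected so the script also runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; safe on headless machines
import matplotlib.pyplot as plt
import numpy as np

# Plot two curves computed with NumPy on one set of axes
x = np.linspace(0, 2 * np.pi, 100)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), label="sin(x)")
ax.plot(x, np.cos(x), label="cos(x)")
ax.set_xlabel("x")
ax.set_ylabel("value")
ax.set_title("Trigonometric functions")
ax.legend()
fig.savefig("trig.png")  # write the figure to disk as a PNG
```

The `fig, ax` pattern shown here is Matplotlib's object-oriented interface, which scales better to multi-panel figures than the implicit `plt.plot` style.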

Statistical Visualization with Seaborn

Seaborn is a statistical data visualization library built on top of Matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.

Key Features:

  • Variety of visualization patterns
  • Concise syntax and appealing default themes
  • Specialization in statistical visualization
  • Integration with Pandas DataFrames
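
As a hedged illustration of that Pandas integration, the sketch below builds a tiny hand-made DataFrame (the column names and values are invented) and draws a bar plot of per-group means in one call:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import pandas as pd
import seaborn as sns

# A small illustrative DataFrame; Seaborn plots directly from it
df = pd.DataFrame({
    "group": ["a", "a", "b", "b"],
    "value": [1.0, 2.0, 3.0, 4.0],
})

# barplot aggregates each group to its mean and adds error bars
# automatically, which is the kind of statistical default Seaborn provides
ax = sns.barplot(data=df, x="group", y="value")
ax.figure.savefig("bars.png")
```

Note how the statistical aggregation (mean per group) happens inside the plotting call itself, rather than requiring a separate `groupby` step.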

Conclusion

Python offers a rich ecosystem of libraries for web scraping and data visualization. By combining these tools effectively, you can extract valuable insights from web data and present them in compelling visual formats.