Google designed Puppeteer to provide a simple yet powerful interface in Node.js for automating tests and various tasks using the Chromium browser engine. It runs headless by default, but it can be configured to run full Chrome or Chromium.
The API built by the Puppeteer team uses the DevTools Protocol to take control of a web browser, such as Chrome, and perform different tasks, like:
- Snap screenshots and generate PDFs of pages
- Automate form submission
- UI testing (clicking buttons, keyboard input, etc.)
- Scrape a SPA and generate pre-rendered content (Server-Side Rendering)
Most actions that you can do manually in the browser can also be done using Puppeteer. Furthermore, they can be automated so you can save more time and focus on other matters.
Puppeteer was also built to be developer-friendly. People familiar with other popular testing frameworks, such as Mocha, will feel right at home with Puppeteer and find an active community offering support. This has led to massive growth in popularity among developers.
Of course, Puppeteer isn’t suitable only for testing. After all, if it can do anything a standard browser can do, it can be extremely useful for web scrapers. Namely, it can execute JavaScript so the scraper can reach the page’s fully rendered HTML, and it can imitate normal user behavior by scrolling through the page or clicking on random sections.
These much-needed functionalities make headless browsers a core component of any commercial data extraction tool and of all but the simplest homemade web scrapers.
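To give a taste of what that user-like behavior looks like in practice, here is a minimal sketch. It is not part of the tutorial that follows; the URL is a placeholder and the commented-out selector is hypothetical, just to show where a click would go:

const puppeteer = require('puppeteer')

async function actLikeAUser() {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.goto('https://example.com') // placeholder URL

  // Execute JavaScript inside the page to scroll down one viewport,
  // the way a real user skimming the page would
  await page.evaluate(() => window.scrollBy(0, window.innerHeight))

  // Clicking works the same way; the selector below is hypothetical
  // await page.click('.load-more-button')

  await browser.close()
}

actLikeAUser()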
First and foremost, make sure you have up-to-date versions of Node.js and Puppeteer installed on your machine. If that isn’t the case, you can follow the steps below to install all prerequisites.
You can download and install Node.js from the official website, nodejs.org. Node’s default package manager, npm, comes preinstalled with Node.js.
To install the Puppeteer library, you can run the following command in your project root directory:
npm install puppeteer # or "yarn add puppeteer"
Note that when you install Puppeteer, it also downloads a recent version of Chromium that is guaranteed to work with the API.
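Once installed, you can verify that everything works with a tiny script. This is a minimal sketch rather than part of the tutorial itself: it launches the browser (headless by default; pass { headless: false } to watch it run), opens a placeholder URL, and snaps a screenshot.

const puppeteer = require('puppeteer')

async function smokeTest() {
  const browser = await puppeteer.launch() // pass { headless: false } to see the browser window
  const page = await browser.newPage()
  await page.goto('https://example.com') // placeholder URL
  await page.screenshot({ path: 'example.png' }) // saves a screenshot next to the script
  await browser.close()
}

smokeTest()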
We will use the /r/learnprogramming subreddit for this article. We want to navigate to the page and grab the title and URL of every post, and we’ll use the evaluate() method for that.
The code should look like this:
const puppeteer = require('puppeteer')

async function tutorial() {
  try {
    const URL = 'https://old.reddit.com/r/learnprogramming/'
    const browser = await puppeteer.launch()
    const page = await browser.newPage()
    await page.goto(URL)

    // Run code inside the page context to collect the URL and title of every post
    let data = await page.evaluate(() => {
      let results = []
      let items = document.querySelectorAll('.thing')
      items.forEach((item) => {
        results.push({
          url: item.getAttribute('data-url'),
          title: item.querySelector('.title').innerText,
        })
      })
      return results
    })

    console.log(data)
    await browser.close()
  } catch (error) {
    console.error(error)
  }
}

tutorial()
Using the Inspect tool presented earlier, we can grab all the posts by targeting the .thing selector. We iterate through them and, for each one, push the URL and title into an array.
After the entire process is completed, you can see the result in your console.
Great, we scraped the first page. But how do we scrape multiple pages of this subreddit?
It’s simpler than you think. Here’s the code:
const puppeteer = require('puppeteer')

async function tutorial() {
  try {
    const URL = 'https://old.reddit.com/r/learnprogramming/'
    // headless: false opens a visible browser so you can watch the pagination happen
    const browser = await puppeteer.launch({ headless: false })
    const page = await browser.newPage()
    await page.goto(URL)

    let pagesToScrape = 5
    let currentPage = 1
    let data = []

    while (currentPage <= pagesToScrape) {
      // Collect the URL and title of every post on the current page
      let newResults = await page.evaluate(() => {
        let results = []
        let items = document.querySelectorAll('.thing')
        items.forEach((item) => {
          results.push({
            url: item.getAttribute('data-url'),
            title: item.querySelector('.title').innerText,
          })
        })
        return results
      })
      data = data.concat(newResults)

      // Move to the next page, unless this was the last one we wanted
      if (currentPage < pagesToScrape) {
        await page.click('.next-button a')
        await page.waitForSelector('.thing')
        await page.waitForSelector('.next-button a')
      }
      currentPage++
    }

    console.log(data)
    await browser.close()
  } catch (error) {
    console.error(error)
  }
}

tutorial()
We need a variable to know how many pages we want to scrape and another variable for the current page. While the current page is less than or equal to the number of pages that we want to scrape, we grab the URL and title for each post on the page. After each page is harvested, we concatenate the new results with the ones already scraped.
Then we click the next-page button and repeat the scraping process until we reach the desired number of pages. We also increment the current page counter after each iteration.
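One detail worth knowing: clicking the next-page link triggers a full page navigation, and relying only on waitForSelector can occasionally race against that navigation. A common alternative, shown below as a standalone sketch rather than as part of the tutorial code, is to wait for the navigation explicitly with Promise.all:

const puppeteer = require('puppeteer')

async function nextPageExample() {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.goto('https://old.reddit.com/r/learnprogramming/')

  // Wait for the click and the navigation it triggers together,
  // so we never interact with the old page after it unloads
  await Promise.all([
    page.waitForNavigation(),
    page.click('.next-button a'),
  ])
  await page.waitForSelector('.thing')

  console.log('Now on the next page:', page.url())
  await browser.close()
}

nextPageExample()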
Happy Coding …