Google designed Puppeteer to provide a simple yet powerful interface in Node.js for automating tests and various tasks using the Chromium browser engine. It runs headless by default, but it can be configured to run full Chrome or Chromium.
The API built by the Puppeteer team uses the DevTools Protocol to take control of a web browser, like Chrome, and perform different tasks, such as the ones below (a short example follows the list):
- Snap screenshots and generate PDFs of pages
- Automate form submission
- UI testing (clicking buttons, keyboard input, etc.)
- Scrape a SPA and generate pre-rendered content (Server-Side Rendering)
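To get a sense of how little code the first two items take, here is a minimal sketch. The URL and file names are placeholders, not part of any project in this guide, and the default headless launch is assumed:

```js
const puppeteer = require('puppeteer')

async function capture() {
  const browser = await puppeteer.launch() // headless by default
  const page = await browser.newPage()

  await page.goto('https://example.com', { waitUntil: 'networkidle2' })

  // Save the rendered page as an image and as a PDF
  await page.screenshot({ path: 'example.png', fullPage: true })
  await page.pdf({ path: 'example.pdf', format: 'A4' })

  await browser.close()
}

capture()
```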
Most actions that you can do manually in the browser can also be done using Puppeteer. Furthermore, they can be automated so you can save more time and focus on other matters.
Puppeteer was also built to be developer-friendly. People familiar with other popular testing frameworks, such as Mocha, will feel right at home with Puppeteer and will find an active community offering support. This has led to massive growth in popularity among developers.
Of course, Puppeteer isn’t suited only for testing. After all, if it can do anything a standard browser can do, then it can be extremely useful for web scrapers. Namely, it can execute JavaScript so that the scraper can reach the page’s fully rendered HTML, and it can imitate normal user behavior by scrolling through the page or clicking on random sections.
These much-needed functionalities make headless browsers a core component of any commercial data extraction tool and of all but the simplest homemade web scrapers.
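As a rough sketch of both ideas, you can run JavaScript inside the page and scroll it the way a user would. The URL, scroll count, and delay below are placeholder assumptions, not part of the Reddit example later in this guide:

```js
const puppeteer = require('puppeteer')

async function scrape() {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.goto('https://example.com', { waitUntil: 'networkidle2' })

  // Scroll one viewport at a time, pausing briefly, to mimic a user
  // and give lazy-loaded content a chance to appear
  for (let i = 0; i < 5; i++) {
    await page.evaluate(() => window.scrollBy(0, window.innerHeight))
    await new Promise((resolve) => setTimeout(resolve, 500))
  }

  // Execute JavaScript in the page context to grab the rendered HTML
  const html = await page.evaluate(() => document.documentElement.outerHTML)
  console.log(html.length)

  await browser.close()
}

scrape()
```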
First and foremost, make sure you have up-to-date versions of Node.js and Puppeteer installed on your machine. If that isn’t the case, you can follow the steps below to install all prerequisites.
You can download and install Node.js from the official Node.js website. Node’s default package manager, npm, comes preinstalled with Node.js.
To install the Puppeteer library, you can run the following command in your project root directory:
```bash
npm install puppeteer # or "yarn add puppeteer"
```
Note that when you install Puppeteer, it also downloads a recent version of Chromium that is guaranteed to work with that version of the API.
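If you want to confirm the installation worked, one quick sanity check (a minimal sketch assuming the default bundled download completed) is to launch the browser and print its version:

```js
const puppeteer = require('puppeteer')

async function checkInstall() {
  // Launches the bundled Chromium and prints its version string
  const browser = await puppeteer.launch()
  console.log(await browser.version())
  await browser.close()
}

checkInstall()
```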
First, we need to inspect the website we’re scraping and find the login fields. We can do that by right-clicking the element and choosing the Inspect option.
In my case, the inputs are inside a form with the class login-form. We can enter the login credentials using the type() method.
Also, if you want to make sure it performs the correct actions, you can set the headless option to false when you launch the Puppeteer instance. You’ll then be able to watch Puppeteer carry out the entire process in a visible browser window.
```js
const puppeteer = require('puppeteer')

async function login() {
  try {
    const URL = 'https://old.reddit.com/'
    // headless: false opens a visible browser window so you can watch the steps
    const browser = await puppeteer.launch({ headless: false })
    const page = await browser.newPage()

    await page.goto(URL)

    // Fill in the credentials inside the form with the class "login-form"
    await page.type('.login-form input[name="user"]', 'EMAIL@gmail.com')
    await page.type('.login-form input[name="passwd"]', 'PASSWORD')

    // Click the login button and wait for the resulting navigation to finish
    await Promise.all([
      page.click('.login-form .submit button'),
      page.waitForNavigation(),
    ])

    await browser.close()
  } catch (error) {
    console.error(error)
  }
}

login()
```
To simulate a mouse click, we can use the click() method. After clicking the login button, we should wait for the next page to load, which we can do with the waitForNavigation() method. The two calls are wrapped in Promise.all() so that we start waiting for the navigation before the click triggers it; if we called waitForNavigation() only after awaiting the click, the navigation could already have happened and the wait could time out.
If we entered the correct credentials, we should be logged in now!
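If you’d rather have the script confirm this instead of taking it on faith, one option is to wait for an element that only appears when you’re signed in. The span.user selector below is an assumption about old Reddit’s logged-in header, so treat this as a sketch and adjust the selector if the markup differs:

```js
// Hypothetical helper: resolves to true if a logged-in-only element appears.
// The 'span.user' selector is an assumption about old.reddit.com's header.
async function isLoggedIn(page) {
  try {
    await page.waitForSelector('span.user', { timeout: 5000 })
    return true
  } catch {
    return false
  }
}

// Call it inside login(), just before browser.close():
// console.log((await isLoggedIn(page)) ? 'Logged in!' : 'Login seems to have failed')
```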
Happy Coding …