Using Google Lighthouse for Web Pages

[This article was first published on The Jumping Rivers Blog, and kindly contributed to R-bloggers.]

This is part one of a three part series on Lighthouse for Shiny Apps.

  • Part 1: Using Google Lighthouse for Web Pages (This post)
  • Part 2: Analysing Shiny App start-up Times with Google Lighthouse (Coming soon)
  • Part 3: Analysing Shiny App Components with Google Lighthouse (Coming soon)

Intro

This blog post was partly inspired by Colin Fay’s talk “Destroy All Widgets” at our “Shiny In Production” conference in 2022. In that talk, Colin spoke about HTML widgets and highlighted how detrimental they can be to the speed of a Shiny app. Speaking of which, the next Shiny In Production conference is taking place on 9th and 10th of October 2024, and recordings for this year’s events are coming soon to our YouTube channel.


Join us for the next installment of our Shiny in Production conference! For more details, check out our conference website!


I wanted to see if I could measure the speed of a collection of Shiny apps. To do so, I was pointed to Google Lighthouse, and this blog is dedicated to using and understanding Lighthouse before I start applying it to Shiny apps.

Google Lighthouse

Google Lighthouse is an open-source tool which can be used to test webpages (or web-hosted apps like Shiny apps). For a specified webpage, Lighthouse generates a report summarising several aspects of that page. For Shiny, the most important aspects are summarised in the overall Performance score and the Accessibility score, and one of the most useful parts is the report's feedback on how you can improve them.

Before you can use Lighthouse you must install it (and npm if you don’t already have it):

npm install -g lighthouse

Then to run a Google Lighthouse assessment in the command line you simply run:

lighthouse --output json --output-path data/output_file.json url

Where you specify:

  • The output format: either json or csv. I used json, as it stores more detailed information.
  • The output path for where you would like the data to be stored.
  • The URL of the Shiny app you would like to test (the location of your deployed app or, if developing locally, the URL that Shiny prints when the app starts: Listening on http://127.0.0.1:4780).

One cool feature of Lighthouse is that you can test apps in both desktop and mobile settings. The default is mobile but you can specify desktop by adding --preset desktop after the url argument.

When you run the command, a new Chrome window will open at the specified URL, where Lighthouse runs its report. The browser is closed automatically when Lighthouse is finished. For all the Lighthouse demos in this blog I am going to use our website, for consistency.

Another way to access Lighthouse is to simply use it in a Chrome browser and open the DevTools panel, as described in the Chrome Developer documentation. A Lighthouse tab should be visible in the “more tabs” section, where you can run performance checks interactively.

[Image: Lighthouse in the browser — a screenshot of the DevTools panel titled "Generate a Lighthouse report", with an "Analyze page load" button and radio buttons to select different options, including whether to test for mobile or desktop devices (mobile is selected by default).]

From DevTools all you do is tick the boxes to specify the device type and performance metrics you want to assess. Then press “Analyze page load” to start the Lighthouse report generation.

Lighthouse Output

Depending on how you've run the Lighthouse report, the way you access the results will differ. Firstly, if you used the terminal and saved the Lighthouse output, you will have a csv or json file containing the data displayed in the report (the json output contains more in-depth data).
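If you saved the json output, the headline scores can be pulled out programmatically. Here is a minimal Python sketch (Python is my addition, not part of the original workflow); it assumes the report follows the standard Lighthouse JSON structure, where each entry in `categories` stores its score as a fraction between 0 and 1:

```python
import json

def category_scores(report):
    """Return each category's score on the 0-100 scale Lighthouse displays."""
    return {
        key: round(cat["score"] * 100)
        for key, cat in report["categories"].items()
        if cat.get("score") is not None  # PWA often has no numeric score
    }

# In practice you would load the file written by the CLI:
# report = json.load(open("data/output_file.json"))
# Here, a tiny stand-in report with the same shape:
report = {
    "categories": {
        "performance": {"score": 1.0},
        "accessibility": {"score": 0.96},
        "best-practices": {"score": 1.0},
        "seo": {"score": 1.0},
    }
}
print(category_scores(report))
# {'performance': 100, 'accessibility': 96, 'best-practices': 100, 'seo': 100}
```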

Alternatively, from the terminal you can add --view after the URL, and the Lighthouse report will open in your browser when ready. Here is an example:

[Image: Lighthouse report in the browser. The page shows four metrics at the top and their scores out of 100 — Performance: 100, Accessibility: 100, Best Practices: 92, SEO: 100, PWA: no score shown — followed by a list of detailed metrics and a view of the page in question.]

Lastly, if you have run Lighthouse through DevTools in a Chrome browser, the report will become visible in the DevTools panel. Location aside, the report should look identical to the browser version created with the --view option. It should look similar to this:

[Image: The same report as above, shown in the DevTools window split with the page in question. This time the metrics are as follows — Performance: 100, Accessibility: 96, Best Practices: 100, SEO: 100, PWA: no score shown.]

You may have noticed that I got different scores in the two screenshots, even though I used the same URL for both. This is a good opportunity to bring up one of the drawbacks of Lighthouse: the variability in its results. For example, you could run a test on our website and get a different score again. There are a number of reasons for this, including internet or device performance and browser extensions, so the Lighthouse developers recommend running multiple tests. This topic is covered in more detail here.
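Because of this variability, a common approach is to run Lighthouse several times and take the median performance score. Below is a small Python sketch of that summarising step (again, my addition rather than part of the Lighthouse tooling; the dictionaries mimic the `categories` structure of parsed Lighthouse JSON reports):

```python
import statistics

def median_performance(reports):
    """Median performance score (0-100) across several parsed Lighthouse reports."""
    scores = [round(r["categories"]["performance"]["score"] * 100) for r in reports]
    return statistics.median(scores)

# Three hypothetical runs of the same page, as parsed with json.load():
runs = [
    {"categories": {"performance": {"score": 0.92}}},
    {"categories": {"performance": {"score": 0.97}}},
    {"categories": {"performance": {"score": 0.95}}},
]
print(median_performance(runs))  # 95
```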

Lighthouse Performance Metrics

Lighthouse scores apps on 5 measures: Performance, Accessibility, Best Practices, SEO (search engine optimization) and PWA (progressive web app).

Here, we will look at the overall Performance score. This is based on a weighted combination of several different metrics. As of Lighthouse 10 (the weighting was slightly different in version 8), the score is made up of:

  • 10% First Contentful Paint – This is the time from when the page starts loading until any part of the page’s content is rendered on the screen. “Content” can be text, images, <svg> elements or non-white <canvas> elements.
  • 10% Speed Index – This is how quickly the contents of a page are visibly populated.
  • 25% Largest Contentful Paint – This metric is the time between the page starting and the largest visible image or text block loading.
  • 30% Total Blocking Time – This is the total time between First Contentful Paint and another metric called Time to Interactive (which measures how long the app takes to become interactive for the user) during which the main thread was blocked long enough to prevent input responsiveness.
  • 25% Cumulative Layout Shift – This is a measure of the largest burst of unexpected layout shifts which occurs during the lifespan of a page; a good explanation can be found here.

Performance scores lie in a range between 0 (worst) and 100 (best).
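To make the weighting concrete, here is a small Python sketch (my addition, not code from Lighthouse itself) that combines hypothetical per-metric scores, each already on a 0-100 scale, using the Lighthouse 10 weights listed above. Note that Lighthouse first converts each raw timing into a metric score via a log-normal curve; this only illustrates the final weighted step:

```python
# Lighthouse 10 weights for the overall performance score
WEIGHTS = {
    "first-contentful-paint": 0.10,
    "speed-index": 0.10,
    "largest-contentful-paint": 0.25,
    "total-blocking-time": 0.30,
    "cumulative-layout-shift": 0.25,
}

def performance_score(metric_scores):
    """Weighted average of per-metric scores (each on a 0-100 scale)."""
    return round(sum(WEIGHTS[m] * s for m, s in metric_scores.items()))

# Hypothetical metric scores for a page:
scores = {
    "first-contentful-paint": 90,
    "speed-index": 80,
    "largest-contentful-paint": 80,
    "total-blocking-time": 90,
    "cumulative-layout-shift": 80,
}
print(performance_score(scores))  # 84
```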

Lighthouse Performance Suggestions

Another cool feature of Google Lighthouse is the performance improvement suggestions. I am going to use the Surfline website as an example for this section. These suggestions can be found underneath the performance score on the report and should look similar to the image below.

[Image: A report page as in the above images, this time for a different webpage. The scores here are 74, 81, 83 and 100. Below is a section called "Opportunities", which lists ways in which the score can be improved, along with estimated loading-time savings.]

Each suggestion can be expanded for more information, alongside the visible estimated time saving from implementing it. These suggestions can be helpful if you want to improve a particular aspect of your website or just generally streamline it.
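These suggestions are also stored in the saved json output: in the Lighthouse JSON format, opportunity audits carry a `details` block of type "opportunity" with an `overallSavingsMs` estimate. A small Python sketch that lists them by estimated saving (the report here is a made-up fragment with the same shape as real Lighthouse output):

```python
def opportunities(report):
    """Return (audit title, estimated ms saved) pairs, largest saving first."""
    found = []
    for audit in report["audits"].values():
        details = audit.get("details") or {}
        if details.get("type") == "opportunity":
            found.append((audit["title"], details.get("overallSavingsMs", 0)))
    return sorted(found, key=lambda item: item[1], reverse=True)

# A made-up report fragment mimicking the Lighthouse JSON structure:
report = {
    "audits": {
        "render-blocking-resources": {
            "title": "Eliminate render-blocking resources",
            "details": {"type": "opportunity", "overallSavingsMs": 450},
        },
        "uses-optimized-images": {
            "title": "Efficiently encode images",
            "details": {"type": "opportunity", "overallSavingsMs": 1200},
        },
        "first-contentful-paint": {"title": "First Contentful Paint"},
    }
}
for title, saving in opportunities(report):
    print(f"{title}: ~{saving} ms")
# Efficiently encode images: ~1200 ms
# Eliminate render-blocking resources: ~450 ms
```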

This was an overview of Google Lighthouse, covering the various ways to run reports on web pages and some guidelines for interpreting them. We can also use Lighthouse to analyse Shiny applications, which will be covered in the next installment of this blog series.
