Automatic resumes of your R-developer portfolio from your R-Universe


Hi R-bloggers!

Starting from today, all posts from this blog in the R category will also appear on R-bloggers. I would like to thank Tal for aggregating my blog, and say “hi!” to all R-bloggers readers. I’m a particle physicist with a passion for R, Statistics and Machine Learning. If you want to find out something more about me, you can take a look at my website, and links therein.

Introduction

R-universe is a cool initiative from rOpenSci that allows you to create your own CRAN-like repository. This repository is kept in sync with the GitHub repositories (main or specific branches, or releases) associated with your R packages, so an R-universe is a very low-effort way to organize and share your personal package ecosystem.

If you want to set up your own R-universe, follow the instructions in this blog post. Here, I assume that you have already created your own R-universe, and show you how to retrieve metadata about your packages using the R-universe API.

Retrieving package descriptions from your R-universe API

Once you have it set up, your R-universe will be available at the URL your-user-name.r-universe.dev; for instance, mine is vgherard.r-universe.dev. From your R-universe home page, you can access the documentation of the API. We will use the command:

GET /stats/descriptions
    NDJSON stream with data from package DESCRIPTION files.

The JSON stream can be read with jsonlite, as follows:

con <- url("https://vgherard.r-universe.dev/stats/descriptions")
pkgs <- jsonlite::stream_in(con)

 Found 6 records...
 Imported 6 records. Simplifying...

The result is a data frame with all the entries of your packages’ DESCRIPTION files, e.g.:

pkgs[, c("Package", "Title", "Version")]
   Package                                             Title
1      r2r                    R-Object to R-Object Hash Maps
2   kgrams                  Classical k-gram Language Models
3 scribblr                          A Notepad Inside RStudio
4  gsample   Efficient Weighted Sampling Without Replacement
5      sbo Text Prediction via Stupid Back-Off N-Gram Models
6     fcci              Feldman-Cousins Confidence Intervals
     Version
1 0.1.1.9000
2      0.1.0
3 0.2.0.9000
4      0.1.0
5      0.5.0
6      1.0.0
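As an aside, this endpoint serves newline-delimited JSON, one DESCRIPTION record per line, so you are not tied to stream_in(). A rough hand-rolled equivalent (just a sketch; it returns a list of records rather than the simplified data frame produced by stream_in()) would be:

con <- url("https://vgherard.r-universe.dev/stats/descriptions")
lines <- readLines(con)                       # one JSON object per line (NDJSON)
close(con)
records <- lapply(lines, jsonlite::fromJSON)  # list of DESCRIPTION records
length(records)                               # 6, as above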

I use this query on my personal website to automatically generate a resume of the packages available on my R-universe (combined with a scheduled GitHub Actions workflow that periodically updates the Code section of my website). More precisely, I define an R string txt containing the Markdown code for my resume, and inline it in R Markdown using the `r ` syntax. This is the code I use on my website:

txt <- ""
# Build one Markdown section per package: a linked heading, a CRAN status
# badge, and the package Title and Description.
for (i in seq_len(nrow(pkgs))) {
    txt <- paste0(
        txt, 
        "### [`", pkgs[i, "Package"], "`](", pkgs[i, "RemoteUrl"], ")", "\n",
        "[![CRAN status](https://www.r-pkg.org/badges/version/", pkgs[i, "Package"],
        ")](https://CRAN.R-project.org/package=", pkgs[i, "Package"], ")",
        "\n\n",
        "*", pkgs[i, "Title"], ".* ", pkgs[i, "Description"],
        "\n\n"
        )
}

and this is the output:

r2r

[CRAN status badge]

R-Object to R-Object Hash Maps. Implementation of hash tables (hash sets and hash maps) in R, featuring arbitrary R objects as keys, arbitrary hash and key-comparison functions, and customizable behaviour upon queries of missing keys.

kgrams

[CRAN status badge]

Classical k-gram Language Models. Tools for training and evaluating k-gram language models in R, supporting several probability smoothing techniques, perplexity computations, random text generation and more.

scribblr

[CRAN status badge]

A Notepad Inside RStudio. A project aware notepad inside RStudio, for taking quick project-related notes without distractions. RStudio addin.

gsample

[CRAN status badge]

Efficient Weighted Sampling Without Replacement. Sample without replacement using the Gumbel-Max trick (c.f. ).

sbo

[CRAN status badge]

Text Prediction via Stupid Back-Off N-Gram Models. Utilities for training and evaluating text predictors based on Stupid Back-Off N-gram models (Brants et al., 2007, https://www.aclweb.org/anthology/D07-1090/).

fcci

[CRAN status badge]

Feldman-Cousins Confidence Intervals. Provides support for building Feldman-Cousins confidence intervals [G. J. Feldman and R. D. Cousins (1998) doi:10.1103/PhysRevD.57.3873].
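For completeness, this is roughly how the inlining looks in the R Markdown source of my Code page. The chunk below is a sketch, not the literal source: the chunk options are illustrative, and the loop has been folded into a single vectorized paste0() call, which should produce the same txt string.

```{r, include = FALSE}
con <- url("https://vgherard.r-universe.dev/stats/descriptions")
pkgs <- jsonlite::stream_in(con)

# One Markdown section per package, collapsed into a single string
txt <- paste0(
  "### [`", pkgs$Package, "`](", pkgs$RemoteUrl, ")\n",
  "[![CRAN status](https://www.r-pkg.org/badges/version/", pkgs$Package,
  ")](https://CRAN.R-project.org/package=", pkgs$Package, ")\n\n",
  "*", pkgs$Title, ".* ", pkgs$Description, "\n\n",
  collapse = ""
)
```

`r txt`

When the page is knitted, the chunk runs silently (include = FALSE) and the inline `r txt` expression injects the generated Markdown, which is then rendered like any hand-written content.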
