A Simple Prediction Web Service Using the New fiery Package

fiery is a new Rook/httpuv-based R web server in town, created by @thomasp85, that aims to fill the gap between raw HTTP & websockets and Shiny with a flexible framework for handling requests and serving up responses.

The intent of this post is to provide a quick start to using it to set up a prediction API service.

We’ll use the super complex model described in the first example of the predict.lm manual page and save the fitted model out so we can load it up in the web server and use it for predicting values from inputs.

set.seed(1492)
x <- rnorm(15)
y <- x + rnorm(15)
fit <- lm(y ~ x)
saveRDS(fit, "model.rds")
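
As a quick sanity check, it’s worth round-tripping the saved model and confirming that predict() with se.fit = TRUE returns the list components (fit, se.fit, df, residual.scale) the service will serialize later:

# load the model back from disk and poke it the same way the server
# will; the result is the list that toJSON() will eventually see
fit_check <- readRDS("model.rds")
str(predict(fit_check, data.frame(x = -1.5), se.fit = TRUE))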

The code is annotated, but the gist is to:

  • Fire up the server (NOTE: it puts itself on 0.0.0.0 by default so CHANGE THIS until you’re ready for production)
  • Load the saved model
  • Set up the routing for the requests
  • Send back the prediction results as JSON (since it’s an API vs something meant for humans)

Here’s the code (jump past it for more info):

suppressPackageStartupMessages(library(fiery))
suppressPackageStartupMessages(library(utils))
suppressPackageStartupMessages(library(jsonlite))
suppressPackageStartupMessages(library(shiny))

app <- Fire$new()

# This is absolutely necessary unless you're deliberately trying
# to expose the service to the entire network you're on, which
# you probably don't want to do until you're in test / stage / prod

app$host <- "127.0.0.1"
app$port <- 9123 # completely arbitrary selection, make it whatevs

model <- NULL

# When the app starts, we'll load the model we saved. This
# particular one is just the first example on ?predict.lm.
# This doesn't have to be global, per se, but this makes
# for a quick example of how to set up a model API server

app$on("start", function(server, ...) {
  message(sprintf("Running on %s:%s", app$host, app$port))
  model <<- readRDS("model.rds")
  message("Model loaded")
})

# when the request comes in, route it properly. this will
# be *much* nicer with Thomas' `routr` plugin, but you can
# get up and running now this way until it's fully documented
# and on CRAN.
#
# 3 routes:
#
# if "/" then return an empty HTML page
# if "/info" give some data about the server (just for example purposes)
# if "/predict?val=##" run the value through the model
#
# No error checking or anything as this is (again) a simple
# example

app$on('request', function(server, id, request, ...) {

  response <- list(
    status = 200L,
    headers = list('Content-Type'='text/html'),
    body = ""
  )

  # this helps us see what the path is
  path <- get("PATH_INFO", envir=request)
  if (path == "/info") {

    # Build a list of all the request headers so we can 
    # regurgitate them

    out <- sapply(grep("^[A-Z_0-9]+", names(request), value=TRUE), function(x) {
      sprintf("%s: %s", x, get(x, envir=request))
    })
    out <- paste0(out, collapse="\n")

    response$body <- sprintf("<pre>Connection Id: %s\n\n%s</pre>", id, out)

  } else if (grepl("^/predict", path)) {

    # this gets the query string; we're expecting val=##
    # but aren't going to do any error checking here.
    # You also would want to ensure there is nothing
    # malicious in the query string.
    query <- get("QUERY_STRING", envir=request)

    # handy helper function from the Shiny folks
    input <- shiny::parseQueryString(query)

    message(sprintf("Input: %s", input$val))

    # run the prediction and add the input var value to the list
    res <- predict(model, data.frame(x=as.numeric(input$val)), se.fit = TRUE)
    res$INPUT <- input$val

    # we want to return JSON
    response$headers <- list("Content-Type"="application/json")
    response$body <- jsonlite::toJSON(res, auto_unbox=TRUE, pretty=TRUE)

  }

  response

})

# don't fire off a browser call
app$ignite(showcase=FALSE)

Assuming you’ve saved that as modelserver.r, you can fire it up in R/RStudio proper or on the command line with Rscript modelserver.r (also assuming the fitted model RDS file is in the same directory, which is probably not a good idea for production either).

You can either enter something like http://127.0.0.1:9123/predict?val=-1.5 into your browser to see the JSON result there or use cURL:

$ curl http://127.0.0.1:9123/predict?val=-1.5
{
  "fit": -0.8545,
  "se.fit": 0.5116,
  "df": 13,
  "residual.scale": 1.1088,
  "INPUT": "-1.5"
}

or even httr:

httr::content(httr::GET("http://127.0.0.1:9123/predict?val=-1.5"))
$fit
[1] -0.8545

$se.fit
[1] 0.5116

$df
[1] 13

$residual.scale
[1] 1.1088

$INPUT
[1] "-1.5"

Try hitting http://127.0.0.1:9123/ and http://127.0.0.1:9123/info in similar ways to see what you get.
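
As the comments in the server code note, there’s no validation on val at all. Before exposing this anywhere you’d at least want to reject input that doesn’t parse as a single finite number. Here’s a minimal sketch (the safe_val() helper is mine, not something fiery provides):

# hypothetical helper (not part of fiery): returns NULL for
# missing/non-numeric input so the handler can send back a 400
safe_val <- function(input) {
  val <- suppressWarnings(as.numeric(input$val))
  if (length(val) != 1 || !is.finite(val)) return(NULL)
  val
}

# inside the "/predict" branch you'd then branch on it:
# val <- safe_val(input)
# if (is.null(val)) {
#   response$status <- 400L
#   response$body   <- '{"error":"val must be a single finite number"}'
# } else {
#   res <- predict(model, data.frame(x = val), se.fit = TRUE)
#   ...
# }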

Keep a watchful eye on routr as it will make setting up API servers in R much easier than this. So far I’m finding fiery a nice middle ground between writing raw httpuv servers, abusing Shiny (since it’s really meant for UX work), and dealing with the slightly more complex opencpu package for turning R into a web request handling engine.
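
For a taste of where that’s headed, here’s a rough sketch of the /predict route using routr’s Route/RouteStack interface. Since routr isn’t fully documented or on CRAN yet, treat the exact calls below as an approximation that may shift:

# sketch only -- routr's API may change before it lands on CRAN;
# assumes `model` has been loaded the same way as above
library(fiery)
library(routr)

route <- Route$new()

# routr handlers receive reqres Request/Response objects, so query
# parameters arrive pre-parsed (no QUERY_STRING spelunking needed)
route$add_handler("get", "/predict", function(request, response, keys, ...) {
  val <- as.numeric(request$query$val)
  res <- predict(model, data.frame(x = val), se.fit = TRUE)
  res$INPUT <- request$query$val
  response$status <- 200L
  response$type   <- "json"
  response$body   <- jsonlite::toJSON(res, auto_unbox = TRUE, pretty = TRUE)
  FALSE # FALSE means no further handlers run for this request
})

router <- RouteStack$new()
router$add_route(route, "api")

app <- Fire$new(host = "127.0.0.1", port = 9123)
app$attach(router)
app$ignite(showcase = FALSE)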

Ideally, one would put this behind a security-aware reverse proxy for both safety (you can add some web application firewall-ish rules) and load balancing, but for in-house/local testing, this is a super quick way to publish your models for wider use. Depending on the adoption rate of fiery, I’ll create some future posts that deal with the complexities of security and performance, along with how to put this all into something like Docker for rapid, controlled deployments.
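
By way of illustration only, a bare-bones nginx reverse-proxy stanza for this sort of service might look like the sketch below (the server_name is a placeholder, and you’d layer rate-limiting / WAF-ish rules on top of it):

# illustrative only: terminate client traffic at nginx and
# proxy to the fiery process bound to localhost
server {
    listen      80;
    server_name models.example.com;   # placeholder hostname

    location / {
        proxy_pass       http://127.0.0.1:9123;
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}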
