Transforming E-commerce Data into SCM Value


Introduction

The expansion of e-commerce platforms at the expense of brick-and-mortar retailers is one prominent illustration of the big data phenomenon reshaping traditional retail. There are many well-known reasons for Amazon’s huge success in e-commerce; however, turning data into actionable supply chain insights is often constrained by the inability to transform transactional data into ‘customer-centric’ value.

There are numerous opportunities for retailers to take control of the data accumulated across their supply chain networks and optimize around it.

My intention with this project is to tackle the order management side of the business, which often receives far less attention than demand forecasting.

List of SCM Opportunities:

The goals of this R Shiny app are to:

  1. Visualize the transactional lead time from point of sale to delivery to the end consumer
  2. Maintain the backorder ratio
  3. Monitor the sales run rate

which will allow users to optimize warehouse resource allocation in the future.

The Data

A decent public sales dataset that reflects real-life conditions is often difficult to obtain because of confidentiality; thankfully Olist, the largest department store in Brazilian marketplaces, has published an anonymized dataset of roughly 100,000 orders on Kaggle.

The dataset is well organized, comprising 8 tables in a star schema linked by key-pair relationships across roughly 45 features in total. It is largely self-explanatory, although the naming conventions of some categorical features and the customer reviews written in Portuguese were occasional stumbling blocks.
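To make the structure concrete, here is a minimal sketch of how the core tables might be loaded and joined along their keys. The file and column names are those published with the Kaggle dataset, and the join order shown is only one possible arrangement.

    library(readr)
    library(dplyr)

    # Core tables of the star schema (file names as published on Kaggle)
    orders    <- read_csv("olist_orders_dataset.csv")
    items     <- read_csv("olist_order_items_dataset.csv")
    customers <- read_csv("olist_customers_dataset.csv")
    products  <- read_csv("olist_products_dataset.csv")

    # Orders sit at the centre; the other tables hang off their key columns
    sales <- orders %>%
      left_join(items,     by = "order_id") %>%
      left_join(products,  by = "product_id") %>%
      left_join(customers, by = "customer_id")

    glimpse(sales)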

Limitation

Above all, my lack of deep expertise in the Brazilian market and logistics conditions was an impediment to a fully grounded analysis; I have generalized my understanding to align with circumstances here in the United States.

Approach

Reshape the raw data into an Enterprise Resource Planning (ERP)-style format, so that it could eventually blend with Olist's Warehouse Management System (WMS), by creating features that capture the duration of each partition of the lead time (sketched below).
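A rough sketch of those lead-time features, assuming the `orders` table loaded above and the timestamp columns from the Kaggle release:

    library(dplyr)

    lead_times <- orders %>%
      filter(order_status == "delivered") %>%
      mutate(
        # Green bar: purchase order to sales order (payment approval), in hours
        po_to_so = as.numeric(difftime(order_approved_at,
                                       order_purchase_timestamp, units = "hours")),
        # Blue bar: sales order to goods issue (seller hands off to carrier), in days
        so_to_gi = as.numeric(difftime(order_delivered_carrier_date,
                                       order_approved_at, units = "days")),
        # Red bar: goods issue to goods receipt (carrier delivers to customer), in days
        gi_to_gr = as.numeric(difftime(order_delivered_customer_date,
                                       order_delivered_carrier_date, units = "days")),
        # Gap between the promised RDD and the actual delivery, in days
        rdd_slack = as.numeric(difftime(order_estimated_delivery_date,
                                        order_delivered_customer_date, units = "days"))
      )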

 

Findings and Explanation

Sales

  • Consumers spent over R$7 million (Brazilian reais) in 2017 alone.
  • Sales show a YoY growth rate of 143% (weeks 1-35, 2017 vs. 2018), illustrating the potential of e-commerce in Brazil (a computation sketch follows below).
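The weekly revenue and the YoY comparison could be computed along these lines. This is a sketch using the payments table from the Kaggle release; the exact filters behind the published figures may differ.

    library(dplyr)
    library(lubridate)

    payments <- readr::read_csv("olist_order_payments_dataset.csv")

    weekly_sales <- orders %>%
      inner_join(payments, by = "order_id") %>%
      mutate(year = year(order_purchase_timestamp),
             week = isoweek(order_purchase_timestamp)) %>%
      group_by(year, week) %>%
      summarise(revenue = sum(payment_value), .groups = "drop")

    # YoY growth over the comparable weeks (1-35) of 2017 vs. 2018
    weekly_sales %>%
      filter(week <= 35, year %in% c(2017, 2018)) %>%
      group_by(year) %>%
      summarise(revenue = sum(revenue), .groups = "drop") %>%
      summarise(yoy_growth = revenue[year == 2018] / revenue[year == 2017] - 1)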

Lead Time

This is a breakdown of the latency between the initiation and the execution of product delivery, which is an essential part of this analysis (see the picture linked below).

Explanation:

Black Dotted Line – Customer RDD (Required Delivery Date)

Definition: estimated delivery date communicated to the customer (approximately 24 days out).

Problem: Setting the customer RDD this far beyond the actual delivery date often interferes with sellers' ability to prioritize order fulfillment, especially during stockouts, and prevents good demand forecasting practice from taking shape. Note that forecasting should be keyed to actual shipments as the KPI rather than POS (point of sale).

Green Bar – Purchase Order to Sales Order

Definition: duration from order purchase to payment approval.

  • It takes about 12 hours for the system to run the ETL transaction and fraud checks on both credit cards and boletos.

Blue Bar – Sales Order to Goods Issue

Definition: duration for sellers to ship the goods to the carriers.

  • It takes about 3 days for sellers to deliver to each carrier.
  • This indicates that inventory outages are not yet a problem for many sellers; however, this portion is likely to increase as demand goes up, unless forecasting is done correctly.

Red Bar – Goods Issue to Goods Receipt

Definition: duration for carriers to deliver to the end consumers.

  • About 9 days for carriers to deliver the goods to the customers.

Problem: This is an atrocious result, accounting for 73% of the whole delivery process. Carriers should not hold onto products for this long.

This suggests carriers are likely holding high inventory levels, resulting in large inventory holding costs, opportunity loss, and vulnerability to theft (cargo theft is a well-known problem in Brazil).
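Putting the three segments side by side makes the imbalance obvious. A sketch of how the averages above could be summarized and plotted, assuming the `lead_times` table built earlier:

    library(dplyr)
    library(tidyr)
    library(ggplot2)

    segment_summary <- lead_times %>%
      summarise(
        `PO to SO` = mean(po_to_so / 24, na.rm = TRUE),  # convert hours to days
        `SO to GI` = mean(so_to_gi,      na.rm = TRUE),
        `GI to GR` = mean(gi_to_gr,      na.rm = TRUE)
      ) %>%
      pivot_longer(everything(), names_to = "segment", values_to = "days")

    ggplot(segment_summary, aes(x = reorder(segment, days), y = days)) +
      geom_col(fill = "steelblue") +
      coord_flip() +
      labs(x = NULL, y = "Average duration (days)",
           title = "Where the delivery lead time goes")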

Backorder Rate

The backorder rate KPI measures how many orders cannot be filled at the time a customer places them. Here it is measured as the frequency of orders delivered later than the estimated delivery date mentioned above. I have drawn a red dotted line at 5% as the ideal level to stay under.

 

Back Order Ratio =  (Number of Customer Orders Delayed due to Backorder / Total Number of Customer Orders Shipped) * 100
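A minimal sketch of this KPI against the Olist timestamps, treating an order delivered after its estimated delivery date as backordered (the monthly grouping here is my choice for illustration, not necessarily the app's):

    library(dplyr)
    library(lubridate)

    backorder_rate <- orders %>%
      filter(order_status == "delivered",
             !is.na(order_delivered_customer_date)) %>%
      mutate(month   = floor_date(order_purchase_timestamp, "month"),
             delayed = order_delivered_customer_date > order_estimated_delivery_date) %>%
      group_by(month) %>%
      summarise(backorder_rate_pct = mean(delayed) * 100, .groups = "drop")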

  • The rate is likely to increase during the peak season.
  • One odd finding: the correlation plot indicates that backorder status is more strongly correlated with customer review scores than the type of product is.

Extra

  1. Office furniture is one of the product categories prone to long lead times, mainly because sellers are unable to fulfill the orders on time. Given that furniture has a comparatively long manufacturing time, sellers may need a better forecasting strategy.
  2. Amapá, Roraima, and Amazonas are the top three Brazilian states by average delivery lead time, each exceeding 25 days (date range: 2016/09/15 – 2018/08/27). A query sketch follows below.
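Both findings come from simple group-by summaries. A sketch, reusing the tables loaded earlier (category names are the Portuguese originals unless joined to the translation table):

    library(dplyr)

    total_lead <- lead_times %>%
      mutate(total_days = as.numeric(difftime(order_delivered_customer_date,
                                              order_purchase_timestamp,
                                              units = "days")))

    # Average lead time by product category
    total_lead %>%
      inner_join(items,    by = "order_id") %>%
      inner_join(products, by = "product_id") %>%
      group_by(product_category_name) %>%
      summarise(avg_days = mean(total_days, na.rm = TRUE), n_orders = n()) %>%
      arrange(desc(avg_days))

    # Average lead time by customer state
    total_lead %>%
      inner_join(customers, by = "customer_id") %>%
      group_by(customer_state) %>%
      summarise(avg_days = mean(total_days, na.rm = TRUE)) %>%
      arrange(desc(avg_days))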

Conclusion:

As we all know, the biggest differentiator between Amazon and many other online retailers is the speed of delivery, which can be a major factor in reducing customer churn. My next question was: how can we shorten the duration from carrier shipping (GI) to the end customer (GR)?

In other words, if we had to choose specific cities in Brazil for locating warehouses or staging carriers, where should they be?

I used the K-means clustering algorithm on the geolocation coordinates of customer locations to find cluster centroids; the central position of each partition of Brazil indicates an ideal warehouse location for that region. Obviously, the more fulfillment centers, the faster the delivery; however, we also want to be cost efficient, which means determining the optimal number of clusters.

Deciding the minimum, best value for the number of clusters is not obvious with over 100K geolocation points. The elbow method was used to determine the best value of K, which can clarify the fuzziness of the coordinates. It looks at the within-cluster variation for each candidate number of clusters, calculated as the sum of the Euclidean distances between the data points and their respective cluster centroids. As K grows, the total variation within each cluster shrinks, and at some point the marginal gain drops off, producing an "elbow" in the graph (see the sketch below).
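A sketch of the elbow computation and the final fit, assuming the geolocation file and column names from the Kaggle release; the K range and seeds are illustrative, and the loop can take a while on the full table.

    library(dplyr)

    geo <- readr::read_csv("olist_geolocation_dataset.csv") %>%
      select(lat = geolocation_lat, lng = geolocation_lng)

    set.seed(42)
    wss <- sapply(1:10, function(k) {
      kmeans(geo, centers = k, nstart = 10, iter.max = 30)$tot.withinss
    })

    # The "elbow" is where the marginal drop in within-cluster variation flattens
    plot(1:10, wss, type = "b",
         xlab = "Number of clusters K", ylab = "Total within-cluster sum of squares")

    # Final model with K = 5; the centroids are the candidate warehouse locations
    km <- kmeans(geo, centers = 5, nstart = 25)
    km$centers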

Below are the ideal carrier/warehouse locations for each of the 5 clusters.

Future work:

Earth isn't flat: after completing the analysis, I realized that my K-means result is only reasonably accurate near the equator, since it treats latitude/longitude as planar coordinates. To cluster using the shortest distances along the Earth's surface, density-based algorithms such as DBSCAN or HDBSCAN can be used for spatial clustering with a great-circle (haversine) distance, as sketched below.
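A possible direction (not part of the current app): run DBSCAN on haversine distances. The full pairwise distance matrix is infeasible for ~100K points, so this sketch works on a sample; `eps` and `minPts` are placeholders to tune.

    library(dbscan)     # dbscan(), hdbscan()
    library(geosphere)  # distHaversine()

    set.seed(42)
    geo_sample <- geo[sample(nrow(geo), 3000), ]

    # Pairwise great-circle distances in metres (longitude first for geosphere)
    d <- as.dist(distm(as.matrix(geo_sample[, c("lng", "lat")]), fun = distHaversine))

    # eps = 50 km and minPts = 50 are arbitrary starting points
    db <- dbscan(d, eps = 50000, minPts = 50)
    table(db$cluster)   # cluster 0 holds the noise points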

Efficient coding: the clustering part of this project is not reactive. Without putting all the blame on R's performance limitations or shinyapps.io's 1 GB data restriction, clustering was very slow, and plotting over 100K points with ggplot would crash the app. It would be good to find an option to bypass this problem (one workaround is sketched below).
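One workaround worth trying is to aggregate or downsample before plotting rather than drawing every individual point, for example (hexagonal binning requires the hexbin package to be installed):

    library(ggplot2)

    # Hexagonal binning aggregates the ~100K points before drawing
    ggplot(geo, aes(lng, lat)) +
      geom_hex(bins = 80) +
      coord_quickmap() +
      labs(fill = "Points")

    # Or simply downsample before plotting individual points
    ggplot(dplyr::sample_n(geo, 20000), aes(lng, lat)) +
      geom_point(alpha = 0.2, size = 0.3) +
      coord_quickmap()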

 

LAST UPDATE: 5/14/2019

Author’s LinkedIn: Youngmin Paul Cho

 

 
