Articles by R on Just be-cause

Sometimes more data can hurt!

May 22, 2021 | R on Just be-cause

Photo by Ben White on Unsplash

So here’s a mind blower: in some cases having more samples can actually reduce model performance. Don’t believe it? Neither did I! Read on to see how I demonstrate that phenomenon using a simulation study. Some cont...
[Read more...]

dtplyr speed benchmarks

May 25, 2020 | R on Just be-cause

R has many great tools for data wrangling. Two of those are the dplyr and data.table packages. When people wonder which one they should learn, it is often argued that dplyr is considerably slower than data.table. Granted, data.table is blazing fast, but I personally find the ...
[Read more...]
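The post's actual benchmarks are behind the link, but the basic dtplyr pattern it compares is easy to sketch: `lazy_dt()` translates dplyr verbs into data.table code and defers execution until you collect the result. A minimal example (the data, column names, and sizes below are illustrative, not taken from the post):

```r
library(dplyr)
library(dtplyr)

# Toy data: one grouping column, one numeric column (illustrative only)
df <- data.frame(
  g = sample(letters, 1e6, replace = TRUE),
  x = runif(1e6)
)

# lazy_dt() defers execution; the dplyr verbs below are translated
# into a single data.table expression under the hood
result <- df %>%
  lazy_dt() %>%
  group_by(g) %>%
  summarise(mean_x = mean(x)) %>%
  as_tibble()   # collecting forces evaluation and returns a tibble
```

Calling `show_query()` on the lazy step (before collecting) prints the generated data.table expression, which is handy for checking what the translation actually does.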

dowhy library exploration

April 19, 2020 | R on Just be-cause

It is not often that I find myself thinking “man, I wish we had in R that cool python library!”. That is however the case with the dowhy library which “provides a unified interface for causal inference methods and automatically tests many assumptions, thus making inference accessible to non-experts”. Luckily ...
[Read more...]

Automatic DAG learning – part 2

January 20, 2020 | R on Just be-cause

Intro We’ve seen in a previous post that one of the main differences between classic ML and Causal Inference is the additional step of using the correct adjustment set for the predictor features. In order to find the correct adjustment set we need...
[Read more...]
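For readers curious what "finding the correct adjustment set" looks like in practice, the dagitty package can compute valid adjustment sets directly from a DAG. The toy graph below is my own illustration of the idea, not one from the post:

```r
library(dagitty)

# Hypothetical three-node DAG: Z confounds the effect of X on Y
g <- dagitty("dag { Z -> X ; Z -> Y ; X -> Y }")

# Valid adjustment sets for estimating the causal effect of X on Y;
# here the only minimal set is {Z}
adjustmentSets(g, exposure = "X", outcome = "Y")
```

With the confounder Z adjusted for, the X -> Y effect is identifiable; in a classic ML workflow you would have no principled reason to prefer Z over any other feature, which is the gap the post is pointing at.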

Automatic DAG learning – part 1

October 16, 2019 | R on Just be-cause

I was really struggling to find a header pic for this post when I came across the one above - titled “Dag scoring and selection” - and since that’s sort of the topic of this post I decided to use it! Intro In my second post I’ve stressed how ...
[Read more...]
