# Removing Records by Duplicate Values in R – An Efficiency Comparison

December 20, 2012

After posting “Removing Records by Duplicate Values” yesterday, I had an interesting discussion with my friend Jeffrey Allard tonight about how to code this in R: with a combination of order() and duplicated(), or with sqldf().

Afterward, I ran a simple efficiency comparison between the two methods, shown below. The result is self-explanatory: in terms of user time, dedup1() is at least 10 times faster than dedup2().

```r
> library(sqldf)
> cat(nrow(df1), ncol(df1), '\n')
13444 14
> # DEDUP WITH ORDER() AND DUPLICATED()
> dedup1 <- function(n){
+   for (i in 1:n){
+     df12 <- df1[order(df1$MAJORDRG, df1$INCOME), ]
+     df13 <- df12[!duplicated(df12$MAJORDRG), ]
+   }
+ }
> # DEDUP WITH SQLDF()
> dedup2 <- function(n){
+   for (i in 1:n){
+     df22 <- sqldf("select * from df1 order by MAJORDRG, INCOME")
+     df23 <- sqldf("select a.* from df22 as a inner join (select MAJORDRG, min(rowid) as min_id from df22 group by MAJORDRG) as b on a.MAJORDRG = b.MAJORDRG and a.rowid = b.min_id")
+   }
+ }
> # RUN BOTH METHODS 100 TIMES AND COMPARE CPU TIMES
> system.time(dedup2(100))
   user  system elapsed
 22.581   1.684  26.965
> system.time(dedup1(100))
   user  system elapsed
  1.732   0.080   2.033
```
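The original df1 (a 13444 x 14 data set) is not shown in the post, so here is a minimal sketch of the order() + duplicated() approach on a small hypothetical data frame with the same two columns. Sorting by the key and then by the tie-breaker means duplicated() drops every row after the first within each MAJORDRG group, keeping the row with the minimum INCOME:

```r
# Hypothetical toy data frame standing in for the post's df1
df1 <- data.frame(
  MAJORDRG = c(0, 0, 1, 1, 2),
  INCOME   = c(3000, 2500, 4000, 1500, 2000)
)

# Sort by the dedup key, then by the tie-breaker column
df12 <- df1[order(df1$MAJORDRG, df1$INCOME), ]

# duplicated() flags the second and later occurrences of each
# MAJORDRG, so negating it keeps only the first (lowest-INCOME) row
df13 <- df12[!duplicated(df12$MAJORDRG), ]

print(df13)
```

Because this is plain vectorized indexing on an already-loaded data frame, it avoids the per-call overhead sqldf() pays to ship the data into an SQLite database and back, which is where most of the timing gap above comes from.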
