Sunday, September 27, 2015

A Data Cleaning Example

For this particular example,

  • the variables of interest are stored as key:value pairs and
  • a single data cell can contain an unknown number of key:value pairs.
Basically, we want to convert the input dataset on the left to the output dataset on the right, as illustrated in the graphic below -



The objective is to separate these key:value pairs and store the values in their corresponding key columns.

The hadleyverse packages, especially tidyr, stringr and magrittr, make this task a fairly simple one.
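For illustration, here is a minimal sketch of one way to do this with tidyr (0.5 or later) and dplyr; the column names (id, info) and the separators (";" between pairs, ":" within a pair) are made-up assumptions, and the actual approach used in the post (with stringr and magrittr) may differ.

library(dplyr)
library(tidyr)

# toy input: each cell in 'info' holds an unknown number of key:value pairs
d_in <- data_frame(id = 1:2,
                   info = c("name:abc;age:30", "age:25;city:xyz"))

d_out <- d_in %>%
  separate_rows(info, sep = ";") %>%                       # one row per key:value pair
  separate(info, into = c("key", "value"), sep = ":") %>%  # split each pair into key and value
  spread(key, value)                                       # one column per key, NA where a key is absent

d_out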

Thursday, August 13, 2015

Survival Analysis - 2

In my previous post, I went over the basics of survival analysis, including computing the Kaplan-Meier (KM) estimate for time-to-event data. In this post, I'm exploring Cox's proportional hazards model for survival data. The KM estimator helps in figuring out whether the survival function estimates for different groups are the same or different, while survival models like Cox's proportional hazards model help in relating covariates to the survival function.

Some basic notes about Cox model -

  • It's a semi-parametric model: the baseline hazard function (risk per unit time) does not need to be specified.
  • The proportional hazards condition states that the covariates are multiplicatively related to the hazard. (This assumption should always be checked after fitting a Cox model.)
  • For categorical covariates, the hazard ratio (e.g. treatment vs. no treatment) is constant and does not change over time. One can relax this assumption by using the extended Cox model, which allows covariates to depend on time.
  • The covariates are constant for each subject and do not vary over time.

(There's a one-to-one mapping between the hazard function and the survival function, i.e. a specific hazard function uniquely determines the survival function and vice versa. Simple mathematical details on this relationship can be found on this Wikipedia page.)
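For reference, the relationship is: if h(t) denotes the hazard and H(t) = ∫₀ᵗ h(u) du the cumulative hazard, then S(t) = exp(−H(t)), and conversely h(t) = −d log S(t) / dt.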

I'm using the same datasets (the tongue dataset from package KMsurv and a simulated dataset created using survsim) and the same set of packages as in the previous post - OIsurv, dplyr, ggplot2 and broom.
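As a rough sketch (not the full analysis), fitting a Cox model to the tongue data and checking the proportional hazards assumption could look like this; it assumes the variable names time, delta and type from the KMsurv documentation.

library(survival)
library(KMsurv)
library(broom)

data(tongue)
# hazard for one tumour type relative to the other
cox_fit <- coxph(Surv(time, delta) ~ factor(type), data = tongue)
summary(cox_fit)   # hazard ratio = exp(coef)
tidy(cox_fit)      # tidy coefficient table via broom

# check the proportional hazards assumption
cox.zph(cox_fit)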




Sunday, August 2, 2015

Survival Analysis - 1

I was recently looking for methods to apply to time-to-event data and started exploring survival analysis models. In this post, I'm exploring the basic Kaplan-Meier (KM) estimator, a nonparametric estimator of the survival function. There are a couple of instances where the KM estimator comes in handy -
  • When the survival time is censored
  • Comparing survival functions for different preassigned groups.

Below I'm computing the KM estimator for a real dataset (time to death for 80 males diagnosed with different types of tongue cancer, from package KMsurv) and a simulated dataset (created using package survsim). In addition, I am using the packages survival, OIsurv, dplyr, ggplot2 and broom for this analysis. The first example is taken from an openintro tutorial.
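As a minimal sketch of the first example (assuming the variable names time, delta and type from the KMsurv documentation), the KM estimate could be computed like this:

library(survival)
library(KMsurv)
library(broom)
library(ggplot2)

data(tongue)
# KM estimate of the survival function, stratified by tumour type
km_fit <- survfit(Surv(time, delta) ~ type, data = tongue)
summary(km_fit)

# plot the estimated survival curves
km_df <- tidy(km_fit)   # columns include time, estimate and strata
ggplot(km_df, aes(time, estimate, colour = strata)) +
  geom_step() +
  labs(x = "Time", y = "Estimated S(t)")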

The R Markdown document illustrating the analysis below can also be found here. In my future posts, I'm planning to explore the following survival models -
  • Proportional hazards model
  • Accelerated failure time model
  • Multiple events model (more than 2 possible events)
  • Recurring events (each subject can experience an event multiple times).



Monday, June 8, 2015

ogdindiar: R package to easily access Open Government Data from India Portal

Following up on my earlier posts on accessing Open Government Data from R, I've wrapped this code into an R package - ogdindiar. It's available on GitHub at https://github.com/steadyfish/ogdindiar

It provides one simple function, fetch_data(), to download the required data resource from the https://data.gov.in portal. You can find details about its usage in this vignette.

Below is an example that downloads India's annual and seasonal mean temperature data using this package. You can also see it here.
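As a rough sketch of how such a call might look (the argument name res_id and the resource id value below are placeholders of mine, not taken from the package; see the vignette for the actual interface):

# devtools::install_github("steadyfish/ogdindiar")  # install from GitHub if needed
library(ogdindiar)

# the resource id below is a placeholder; the real id for the annual/seasonal
# mean temperature dataset is listed on https://data.gov.in
temperature_data <- fetch_data(res_id = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX")
head(temperature_data)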

Monday, March 16, 2015

dplyr Use Cases: Non-Interactive Mode

The current release of dplyr (v0.4.1) offers a lot more flexibility regarding the use of important verbs in non-interactive mode. In this post, I'm exploring different possible use cases.


  • group_by_, select_, rename_:
For group_by_, select_ and rename_, we can pass a character vector of variable names as an argument to the .dots parameter.


  • filter_:
To use the filter_ function, we need to pass the filter criteria to the .dots parameter. The criteria can be created using the lazyeval::interp function.


  • mutate_, transmute_, summarise_:
We need to provide two things to these functions - a list of expressions (the functions to be applied, along with their input variables) and a character vector of output variable names. These two things can be passed to the .dots argument using a combination of the lazyeval::interp and setNames functions.

  • joins:
For two-table verbs, there's no *_join_ function, and we don't need one for general purposes. We can just pass a named vector to the by argument; the setNames function comes in handy for this.


The R code for the above-mentioned use cases is shown below and can also be found in this GitHub Gist.


# using dplyr functions in non-interactive mode
# examples
library(plyr)
library(dplyr)
library(lazyeval)  # needed for interp()

d1 = data_frame(x = seq(1, 20), y = rep(1:10, 2), z = rep(1:5, 4))
head(d1)

#### single table verbs ####

# group_by
group_by_fn <- function(d_in, gp_vec){
  d_out = d_in %>%
    group_by_(.dots = gp_vec)
}
gp_vec = c("y", "z")
d1_gp_by_out = group_by_fn(d1, gp_vec)
head(d1_gp_by_out)

# select/rename (haven't included the drop-variables case)
select_fn <- function(d_in, sel_vec){
  d_out = d_in %>%
    select_(.dots = sel_vec)
}
sel_vec = c("x", "y")
d1_select_out = select_fn(d1, sel_vec)
head(d1_select_out)

# filter
filter_fn <- function(d_in, filter_crit){
  d_out = d_in %>%
    filter_(filter_crit)
}
y_vec = 6:8
filter_crit = interp(~ filter_var %in% y_vec, filter_var = as.name("y"))
d1_filter_out = filter_fn(d1, filter_crit)
head(d1_filter_out)

z_vec = 1:2
filter_crit2 = interp(~ filter_var1 %in% y_vec & filter_var2 %in% z_vec,
                      .values = list(filter_var1 = as.name("y"),
                                     filter_var2 = as.name("z")))
d1_filter2_out = filter_fn(d1, filter_crit2)
head(d1_filter2_out)

# mutate, transmute, summarise
mutate_fn <- function(d_in, op_ls, var_vec){
  d_out = d_in %>%
    mutate_(.dots = setNames(op_ls, var_vec))
}
var1_rng = 3:5
op_ls = list(interp(~ f(var1, var2),
                    .values = list(f = as.name("*"),
                                   var1 = as.name("x"),
                                   var2 = as.name("y"))),
             interp(~ f(var1 %in% var1_rng, var2, var3),
                    .values = list(f = as.name("ifelse"),
                                   var1 = as.name("x"),
                                   var2 = as.name("y"),
                                   var3 = as.name("z"))))
var_vec = c("yy", "zz")
d1_mutate_out = mutate_fn(d1, op_ls, var_vec)
head(d1_mutate_out)

var_ls = list("yy", "zz")
d1_mutate_out1 = mutate_fn(d1, op_ls, var_ls)
head(d1_mutate_out1)

#### two table verbs ####

# joins
d2 = data_frame(xx = seq(1, 20), yy = rep(1:10, 2), zz = rep(1:2, 10))
join_fn <- function(d_in1, d_in2, var_vec1, var_vec2){
  d_out = d_in1 %>%
    left_join(d_in2, by = setNames(var_vec2, var_vec1))
}
var_vec1 = c("x", "y")
var_vec2 = c("xx", "yy")
d_join_out = join_fn(d1, d2, var_vec1, var_vec2)
head(d_join_out)

# everything combined (essentially, the power of %>%)
d_combined_out = d1 %>%
  filter_fn(filter_crit) %>%
  group_by_fn(gp_vec) %>%
  mutate_fn(op_ls, var_vec) %>%
  select_fn(c("x", "y", "z")) %>%
  join_fn(., d2, var_vec1, var_vec2)
head(d_combined_out)

# sources:
# http://cran.r-project.org/web/packages/dplyr/vignettes/nse.html
# http://stackoverflow.com/questions/28125816/r-standard-evalation-for-join-dplyr

Tuesday, April 15, 2014

Accessing Open Data Portal (India) using APIs


EDIT: I've wrapped up this code into an R package. You can find more info about it on this blog post and here on GitHub.


As I mentioned in my previous blog post, the Government of India has started an Open Data Portal to make various data public. Most of the datasets on the portal are available for manual download. Some of the datasets, though, can also be accessed using APIs. In this post, I'll go over how to access that data using APIs (specifically the JSON API) in R.

Again, the variety of R packages available makes this a fairly straightforward task. I've mainly made use of these packages - XML, RCurl, RJSONIO and plyr.

The complete process can be described in the following steps -
  1. Get the resource id and API key needed to make the API call
  2. Call the API repeatedly until all the data is obtained
  3. Append all the data to create a single dataset.
Now, I'll describe each of the above steps in detail. The resource id is the identifier for the dataset and can be found on the website (e.g. resource id 6176ee09-3d56-4a3b-8115-21841576b2f6 refers to the dataset on pin-code details). Another mandatory detail when making an API call is the API key, which can be obtained by signing up on the data portal. Thus, the API URL would look something like this -

http://data.gov.in/api/datastore/resource.json?resource_id=6176ee09-3d56-4a3b-8115-21841576b2f6&api-key=<your API key>

The content of this URL can be downloaded into R using the getURL() function. Currently, there's a limit of 100 elements that can be downloaded in a single API call. This necessitates the 2nd step - making repeated API calls until all elements have been downloaded. To accomplish this, we can add an offset parameter to the URL. The URL would now look like -

http://data.gov.in/api/datastore/resource.json?resource_id=6176ee09-3d56-4a3b-8115-21841576b2f6&api-key=<your API key>&offset=1

Here offset signifies the number of calls made so far. For example, if we download 100 data elements in each call, then after downloading the 1st set of 100 elements we'd specify offset=1 to download elements 101-200.

The data thus obtained from the repeated API calls can be converted to a data.frame using ldply(), and the individual data.frames can be combined into a master data.frame using rbind().

The following GitHub Gist shows the actual R code. You can also look at my GitHub project to properly understand the directory structure used in the code.

#### Download the data from the Government of India open data portal ####
w_dir = getwd()
source(file = file.path(w_dir, "Code/Core.R"))
checkAndDownload(c("XML", "RCurl", "RJSONIO", "plyr"))

### Alternative - 1: Using APIs ###
# JSON #
getJSONDoc <- function(link, res_id, api_key, offset, no_elements){
  jsonURL = paste(link,
                  "resource_id=", res_id,
                  "&api-key=", api_key,
                  "&offset=", offset,
                  "&limit=", no_elements,
                  sep = "")
  print(jsonURL)
  doc = getURL(jsonURL)
  fromJSON(doc)
}
getFieldNames <- function(t){
  # t: list
  names(t[[4]])
}
getCount <- function(t){
  # t: list
  t[[3]]
}
getFieldType <- function(t){
  t[[4]]
}
getData <- function(t){
  t[[5]]
}
toDataFrame <- function(lst_elmnt){
  as.data.frame(t(unlist(lst_elmnt)), stringsAsFactors = FALSE)
}
acquire_x_data <- function(x, res_id, api_key){
  currentItr = 0
  returnCount = 1
  while(returnCount > 0){
    JSONList = getJSONDoc(link = "http://data.gov.in/api/datastore/resource.json?",
                          res_id = res_id,
                          api_key = api_key,
                          offset = currentItr,
                          no_elements = 100)
    DataStage1 = ldply(getData(JSONList), toDataFrame)
    print(currentItr)
    print(is(DataStage1$id))
    returnCount = getCount(JSONList)
    if(currentItr == 0) {
      returnData = DataStage1
      returnFieldType = ldply(getFieldType(JSONList), toDataFrame)
    } else if(returnCount > 0) returnData = rbind(returnData, DataStage1)
    print(currentItr)
    print(is(returnData$id))
    currentItr = currentItr + 1
  }
  list(returnData, returnFieldType)
}

# get the resource list file
# (it has resource names and resource ids used for the API call)
resourceList = read.table(
  file = file.path(w_dir, "Data/goi_api_resource_details.csv"),
  header = TRUE,
  sep = ",",
  as.is = TRUE)
api_key = read.table(
  file = file.path(w_dir, "Data/goi_api_key_do_not_share.csv"),
  header = TRUE,
  sep = ",",
  as.is = TRUE)

# make the API call
res = subset(resourceList, resource_name == "pincode")
pincodeDetails = acquire_x_data(x = res[1], res_id = res[2], api_key = api_key)
save(pincodeDetails, file = file.path(w_dir, "Data/pincodeDetails.RData"))



Sunday, February 2, 2014

Know India through Visualisations - 1

I'm going to produce just a couple of charts in this post, a teaser of sorts. In forthcoming posts I'll dig deeper.

I was amazed by the range of existing R packages for working with spatial data without needing to get into much technical detail. The various R packages I've used are described along with the code.

I've obtained the state-level power supply position data for November 2004 (just a random choice) from the Government of India's data portal. The spatial data for India with state boundaries was obtained from the Global Administrative Areas website.



The plot above is generated using the spplot() function from the sp package; below is a similar plot generated using the ggplot() function from the ggplot2 package. In the plots, darker shades of blue signify higher severity of electricity shortage and lighter shades signify lower severity, as can be seen from the legend. The numbers in the legend are in MU, i.e. Million Units (equivalent to gigawatt hours).


The advantage of using ggplot() is that I can easily add additional layers onto this map. For example, I can add state labels, as can be seen below.


The R code for this post is shown below and can also be found in this GitHub Gist.
######## Packages ########
checkAndDownload <- function(packageNames) {
  for(packageName in packageNames) {
    if(!isInstalled(packageName)) {
      install.packages(packageName, repos = "http://lib.stat.cmu.edu/R/CRAN")
    }
    library(packageName, character.only = TRUE, quietly = TRUE, verbose = FALSE)
  }
}
isInstalled <- function(mypkg){
  is.element(mypkg, installed.packages()[, 1])
}
packages <- c("sp", "ggplot2", "plyr", "rgeos", "maptools", "sqldf", "RColorBrewer")
checkAndDownload(packages)
# 'sp' - a package for spatial data
# 'plyr' - required for fortify, which converts 'sp' data to
#          polygon data to be used with ggplot2
# 'rgeos' - required for maptools
# 'maptools' - required for fortify (region argument)

################ Data Input ##############
# get the nation-wide map data with state boundaries
con <- url("http://biogeo.ucdavis.edu/data/gadm2/R/IND_adm1.RData")
load(con)
close(con)
# get the power supply position data for a particular month from the
# Indian government data website
data_url = "http://data.gov.in/access-point-download-count?url=http://data.gov.in/sites/default/files/powerSupplyNov04.csv&nid=34321"
nov04 <- read.table(
  file = data_url,
  header = TRUE,
  sep = ",",
  as.is = TRUE)
power = nov04
# get the state codes file
# (data dictionary with Indian states mapped to a 2-letter state code)
stateCodes <- read.table(
  file = "D:/JustAnotherDataBlog/Data/IndiaStateCodes.csv",
  header = TRUE,
  sep = ",",
  as.is = TRUE)

########## Adhoc Data cleaning #########
# renaming the important columns
colnames(power)
colnames(power)[1] = "State"
colnames(power)[2] = "Demand"
colnames(power)[3] = "Supplies"
colnames(power)[4] = "Net Surplus"
colnames(power)
# adjusting the power supply position in these states based on geographic area
# area numbers obtained from Wikipedia
WB_Area <- 88752
Sikkim_Area <- 7096
Jharkhand_Area <- 79714
WB_Sikkim_Prop <- WB_Area / (WB_Area + Sikkim_Area)
WB_Jharkhand_Prop <- WB_Area / (WB_Area + Jharkhand_Area)
# 1. West Bengal and Sikkim to be split in the ratio of their areas
WB1 <- power[power[, "State"] == "W B + Sikkim", -1] * WB_Sikkim_Prop
SK <- power[power[, "State"] == "W B + Sikkim", -1] - WB1
# 2. DVC numbers to be divided between Jharkhand and West Bengal, by ratio of areas
WB2 <- power[power[, "State"] == "DVC", -1] * WB_Jharkhand_Prop
power[power[, "State"] == "Jharkhand", -1] <- power[power[, "State"] == "Jharkhand", -1] +
  power[power[, "State"] == "DVC", -1] - WB2
WB = WB1 + WB2
# 3. Tripura: not present in some of the datasets
# 4. Andaman and Nicobar, Lakshadweep: not part of the analysis
power = rbind(power, cbind(State = "West Bengal", WB), cbind(State = "Sikkim", SK))

######### Preparing data for visualisation ###########
# merge power data with the state codes file
power_stateCodes <- sqldf("select a.*, b.* from stateCodes as a inner join power as b on a.State = b.State")
# cross check
sum(power_stateCodes$Demand) == power[power[, "State"] == "All India", 2]
power_stateCodes$Net_check = power_stateCodes$Supplies - power_stateCodes$Demand
sum(power_stateCodes$Net_check - power_stateCodes$Net_Surplus)
as.data.frame(gadm)
gadm@data
states <- as.data.frame(gadm@data$NAME_1)
colnames(states) = "State_gadm"
states_stateCodes <- sqldf("select a.*, b.* from states as a
                            left join stateCodes as b on a.State_gadm = b.State")
power_states <- sqldf("select a.State_gadm, a.Code, b.Demand,
                       b.Supplies, b.Net_Surplus from states_stateCodes as a
                       left join power_stateCodes as b on a.Code = b.Code")
power_states$log_Deficit <- log(-power_states$Net_Surplus + 1)
breaks = quantile(power_states$log_Deficit, na.rm = TRUE)
breaks = 1 - exp(breaks)
power_states$Severity = cut(power_states$Net_Surplus, breaks, include.lowest = TRUE)
# with(power_states, power_states[Code == 'MH', ])
# cross check, as we can't merge using the state name
sum(gadm@data$NAME_1 == power_states$State_gadm) == nrow(gadm@data)
gadm <- spCbind(gadm, power_states)
plotclr <- rev(brewer.pal(length(levels(gadm@data$Severity)), "Blues"))

# using spplot
png(file = "D:/JustAnotherDataBlog/Plots/FirstPost_PS_Pos_Nov_04_Ind_spplot.png", width = 500, height = 400)
spplot(gadm, "Severity",
       col.regions = plotclr,
       main = "India: Net Power Supply Position for Nov 2004 (MU)") #, col = "transparent")
dev.off()

# using ggplot
india <- fortify(gadm, region = "NAME_1")
# india.df <- join(india, gadm@data, by = "NAME_1") doesn't work because the id columns are different
names(india)
temp <- gadm@data
india.df <- sqldf("select a.*, b.* from india as a
                   left join temp as b on a.id = b.Name_1") # SQL is not case sensitive
S_Code <- aggregate(cbind(long, lat) ~ Code, data = india.df, FUN = function(x) mean(range(x)))
S_Code <- ddply(india.df, .(Code), function(df) kmeans(df[, 1:2], centers = 1)$centers)
png(file = "D:/JustAnotherDataBlog/Plots/FirstPost_PS_Pos_Nov_04_Ind_ggplot.png", width = 500, height = 400)
p <- ggplot(india.df, aes(long, lat)) +
  geom_polygon(aes(group = group, fill = Severity), color = 'white') +
  geom_text(data = S_Code, aes(long, lat, label = Code), size = 2.5) +
  # geom_path(color = "white") +
  coord_equal() +
  scale_fill_brewer() +
  ggtitle("India: Net Power Supply Position for Nov 2004 (MU)")
p
dev.off()