Vertica and R quickie – group_concat transform function

I’m guessing you can do this in C++, but hey… it works for me.

You’ll notice three functions that need to be created: the main function, which is called from Vertica; the params function, which is optional and holds the configuration of parameters optionally passed to the function; and finally the factory function, which is mandatory and tells Vertica what to expect when running ‘group_concat’.

group_concat <- function(x, y) { # x = the selected columns as a data.frame, y = the optional parameters

 options(useFancyQuotes=FALSE)

 # initialize parameters
 sep <- ifelse(is.null(y[['delimiter']]), ',', as.character(y[['delimiter']]))
 if(is.null(y[['quote']])) y[['quote']] <- 0
 doQuote <- ifelse(y[['quote']]==1, TRUE, FALSE)

 # collapse the first input column into one delimited string, quoted if requested
 if(doQuote) ret <- paste0(sQuote(x[,1]), collapse=sep)
 else ret <- paste0(x[,1], collapse=sep)

 ret
}

group_concat_params <- function() {
 params <- data.frame(datatype=c("varchar", "int"), length=c(100,NA), scale=rep(NA,2),name=c("delimiter","quote"))
 params
}

group_concat_factory <- function() {
 list(
 name=group_concat
 ,udxtype=c("transform")
 ,intype=c("any")
 ,outtype=c("varchar(65000)")
 ,parametertypecallback=group_concat_params
 # ,volatility=c("stable")
 # ,strict=c("called_on_null_input")
 )
}

After distributing the file to the same directory on all nodes, run the following in Vertica:

create or replace library r_func as '.../group_concat.r' language 'R';
create or replace transform function group_concat as name 'group_concat_factory' library r_func;

select group_concat(id using parameters delimiter='|', quote=1) over() from sometable;

R and Vertica

I’ve spent the last few months working my way through the integration of R and Vertica, and will try to collect here the things I find handy. It’s a little disappointing that there is so little about this Vertica feature on the web, but that didn’t stop us from building a scalable statistical model learning machine out of it (I will write about it in later posts…).

For those of you who don’t know HP Vertica, it is a powerful columnar DBMS. I’ve worked with two installations of it, and my colleagues and I are very impressed with it. Those of you who are familiar with PostgreSQL will find many things similar, as both products were created by the same guy.

That’s all the intro I’m going to give since we pay them and not the other way around :)

I suggest that readers of this post go through the chapter called “Developing a User Defined Function in R” in the “HP Vertica 6.1.x (or higher) Programmer’s Guide”. I’ll assume you are familiar with vsql and R.

Short example – normal inverse function in Vertica

Vertica and R communicate via User Defined Functions (UDFs) written in the R language. The columns selected in Vertica are passed to R as a data.frame, so the functions must treat their main argument as a data.frame. Here is an example that creates an inverse normal CDF function: x is the data.frame passed from Vertica, in this case a 3-column data.frame holding the percentile, mean and standard deviation.

# normal inverse
norm_inv <- function(x) { # columns: 1 - percentile, 2 - mean, 3 - sd
  apply(x, 1, function(i) qnorm(p=i[1L], mean=i[2L], sd=i[3L]))
}

norm_invFactory <- function() { # this is the factory function that 'links' between R and Vertica
  list(
    name=norm_inv
    ,udxtype=c("scalar")
    ,intype=c("float","float","float")
    ,outtype=c("float")
  )
}

Create the file above, place it somewhere on your machine, and then load the function into the Vertica database:


MyDB=> CREATE LIBRARY r_func AS '/home/dbadmin/norm_inverse.R' LANGUAGE 'R';

MyDB=> CREATE FUNCTION norm_inv AS NAME 'norm_invFactory' LIBRARY r_func;

MyDB=> select norm_inv(.25,0,1);
norm_inv
--------------------
-0.674489750196082
(1 row)

More about User Defined Functions

Imagine being able to implicitly parallelize an R function across any number of segments and any amount of data. This is basically the promise behind the R language package for Vertica.

What is parallelized exactly? Vertica allows you to partition the data sent into R; it implicitly works out how to divide the load between the nodes and each node’s cores, so you don’t have to write elaborate parallelization code yourself. Each R instance runs independently of the other instances, so you cannot parallelize, say, a single lm() call, but you can run many of them at once (see the sketch after the list below).

The user defined functions are loaded into Vertica using the CREATE LIBRARY syntax and by writing R code that has two main parts, as you might have noticed above:

  1. A main function – the main function is what does the work
  2. A factory function – tells the Vertica engine about the input and output to expect.
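
To make the partitioning point concrete, here is a minimal, untested sketch (my own, not taken from the Programmer’s Guide) of a transform function that fits a separate lm() per partition. The factory follows the same conventions as the other examples in this post.

# main function: x is the data.frame Vertica passes in
# column 1 is treated as the response, column 2 as the predictor
lm_by_partition <- function(x) {
 fit <- lm(x[, 1] ~ x[, 2])
 # return one row per partition: intercept and slope
 data.frame(intercept = unname(coef(fit)[1]), slope = unname(coef(fit)[2]))
}

lm_by_partitionFactory <- function() {
 list(
 name=lm_by_partition
 ,udxtype=c("transform")
 ,intype=c("float","float")
 ,outtype=c("float","float")
 )
}

After loading it with CREATE LIBRARY and CREATE TRANSFORM FUNCTION as shown earlier, it could be called per group (table and column names are hypothetical):

select lm_by_partition(y, x) over(partition by segment_id) from observations;

Because each partition is sent to its own R invocation, every lm() call sees only the rows of its own segment_id, and Vertica spreads the partitions across the nodes and cores for you.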

source() example

Using R’s source() function is possible from within an R UDF in Vertica, which is very useful for large projects with a lot of code: you can change the sourced code without changing any part of the main functions. Here is how.
Create some R file called ‘foo.R’ with a variable called ‘bar’.

## Start of R code
bar <- "we've done it!"
## End of R code

Create the main and factory functions in an R file:

# Main function
sourceTest <- function(x)
{
source('foo.R') # foo.R must be present on every node; use a full path if needed
return(bar)
}
# Factory function
sourceTestFactory <- function()
{
  list(
   name=sourceTest
   ,udxtype=c("scalar")
   ,intype=c("char")
   ,outtype=c("char")
 )
}

In Vertica run:

MyDB=> CREATE LIBRARY RsourceTest AS 'sourceTest.r' LANGUAGE 'R';
MyDB=> CREATE FUNCTION sourceTest AS NAME 'sourceTestFactory' LIBRARY RsourceTest;
MyDB=> SELECT sourceTest('');
sourceTest
--------------------
we've done it!
(1 row)

Pre-requisites for the R language pack

The Vertica Programmer’s Guide instructs you to install a version of libgfortran. Aside from that, you may notice that packages that rely on gfortran, such as “xts”, require installing additional components:

yum install gcc-gfortran
yum install gcc-c++

Installing the ‘forecast’ package for Vertica

The ‘forecast’ package, by Rob J Hyndman, is great for many kinds of time series analysis, and connecting it with Vertica is very powerful for creating forecasts out of your data. Trouble is, the R version used by Vertica is 3.0.0, so you will have to get the older forecast version 4.8. In short:

wget https://stat.ethz.ch/CRAN/contrib/main/Archive/forecast/forecast_4.8.tar.gz

And then in R:

install.packages("tseries")
install.packages("fracdiff")
install.packages("RcppArmadillo")
install.packages("/root/forecast/forecast_4.8.tar.gz", repos=NULL, type="source")
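
To confirm the archived version was picked up, a quick optional check in the same R installation:

library(forecast)
packageVersion("forecast") # should report 4.8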

Voila. Next time I’ll show some more elaborate examples of transform functions that I use.

A note about estimating the number of visits to a site

Motivation

How many times do users visit your site? A very relevant question in web analysis.
One major issue that biases your estimation is users who save no cookies: these users change their ID on every page visit, so they look like one-time visitors and thus inflate the proportion of ones in the distribution. This has an enormous effect on the bounce rate you calculate for your site.
Another, minor, issue, which is a by-product of the solution I’m about to offer, is that most distribution functions have a support of x>=0, while visits start at 1. Read on to see the suggested solution.

I’ve stumbled upon this issue a few times and would like to suggest the following solution for estimating the latent proportion of one-time visitors via maximum likelihood estimation.

Assumptions

  1. There exists a constant proportion (P) of users who don’t save cookies.
  2. The number of visits to the site by users who do save cookies (X) follows some known distribution function (f) whose parameters are unknown. The distribution is truncated at 0, as we will never see the users who visit 0 times. The CDF of f is denoted as F.

We define Y as the distribution of visits to your site :

Pr(Y=1) = P + (1-P) * f(1) / (1-F(0))

For Y=y other than 1 :

Pr(Y=y) = (1-P) * f(y) / (1-F(0))

You might ask yourself why f(y) is weighted with 1/(1-F(0)): since we are looking at the truncated version of the distribution, this factor renormalizes f so that the truncated density sums to 1 over y>=1. I’ll leave the rest of the math to you.
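
If you want to convince yourself numerically, here is a quick check in R (lambda is an arbitrary choice) that the rescaled truncated density sums to one:

lambda <- 5
sum(dpois(1:1000, lambda) / ppois(0, lambda, lower.tail = FALSE)) # very close to 1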

Example

Here is an example assuming the distribution f is Poisson.

First we construct the likelihood function:

# indicator helper
Ind <- function(l) {
 if(is.logical(l)) as.numeric(l)
 else stop('"l" must be logical')
}
# zero-truncated density of the visits distribution
fy <- function(x, lambda, log = FALSE) {
 if(!log) { Ind(x>=1) * dpois(x, lambda) / ppois(0, lambda, lower.tail = FALSE) }
 else { Ind(x>=1) * ( dpois(x, lambda, log=TRUE) - ppois(0, lambda, lower.tail = FALSE, log.p=TRUE)) }
}
# log-likelihood of the one-inflated, zero-truncated model
Lik <- function(par, x) {
 p <- par[1]
 lambda <- par[2]
 n1 <- sum(x==1)
 n0 <- sum(x!=1)
 x0 <- x[x!=1]
 n1 * log(p + (1-p) * fy(1,lambda)) + n0 * log(1-p) + sum(fy(x0,lambda,log=TRUE))
}

Let’s create some data :

x <- c(rpois(5000,5),rep(1,600)) # a sample of 5000 values from a Poisson distribution with lambda = 5, artificially inflated with 600 1s.
par <- c(mean(x==1),mean(x)) # starting values for the estimation process
parOptim <- optim(par,fn=Lik,x=x,method="Nelder-Mead",control=list(fnscale=-1)) # using optim() to find the MLE
parOptim$par[1]
[1] 0.1092563

This means the estimated P is about 10.9% of inflated one-time visitors, close to the 600/5600 ≈ 10.7% that were artificially added. Be sure to remove these users from the distribution of visitors you get, and be sure to test a few candidate distributions and find a way to judge between them before estimating the final numbers.

Good luck.

Coalescing in R

The coalesce function is a recursive null filler, common in practically every database product; however, R seems to be missing this simple function. Here is my suggestion:

coalesce <- function(x,...) {

  fillerList <- list(...)
  y <- try(unlist(fillerList[[1L]]), silent=TRUE)

  if(inherits(y, "try-error") || length(y)==0L) {
    # nothing usable in this filler, leave x unchanged
  }
  else if(length(y)==1L) {
    # scalar filler: replace every NA with it
    x[is.na(x)] <- y
  }
  else {
    # vector filler: replace NAs element-wise
    x[is.na(x)] <- y[is.na(x)]
  }
  # recursion over the remaining fillers
  if(length(fillerList)-1L<=0L) {return(x)}
  else {return(do.call(coalesce, c(list(x), fillerList[-1L])))}
}
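
A quick usage example with the version above (toy data):

x <- c(1, NA, 3, NA)
coalesce(x, c(NA, 2, NA, NA), 0) # fill from the vector first, then the scalar
# [1] 1 2 3 0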

An example of linear discriminant analysis

The following example was shown in an advanced statistics seminar held in Tel Aviv. The material for the presentation comes from C.M. Bishop’s book Pattern Recognition and Machine Learning (Springer, 2006).

One way of separating 2 categories using linear subspaces of the input space (e.g. planes for 3D inputs, lines for 2D inputs, etc.) is by dimensionality reduction:

y=w’x

If x belongs to a P-dimensional input space of real numbers, and w is a P x 1 vector of weights, then y is the one-dimensional projection of x along w. We assign x to C1 if y>0 and to C2 otherwise. The set of x‘s for which y=0 is called the “decision boundary”, a (P-1)-dimensional subspace.

Other features of this method are listed in the book; one that is important for the presentation is that every vector lying in the decision boundary is orthogonal to w. Proof: let x1, x2 be points on the decision boundary separating R1 and R2. Since y=0 for all such points we conclude:

w’x1=w’x2=0

w’(x1-x2)=0

So w is orthogonal to the decision boundary by definition.

Assume we have two groups following normal distributions with different means and the same variance-covariance matrix Σ:

Given the mean points M1, M2, a good strategy might be to find w such that the projections of the means, m1 and m2, reach maximum separation. Adding the constraint w’w=1 and using Lagrange multipliers, we find that w is proportional to (M1-M2). The problem begins when the groups have strongly non-diagonal covariances and the projected values overlap; this can happen even if the two means are well separated. Fisher (1936) proposed the criterion that addresses this, with w proportional to inverse(Σ)(M1-M2).

The histograms of the projected values are histograms of a linear transformation of bivariate normal variables, hence also normally distributed. The code is attached; follow it step by step and adjust the device size and sampling parameters to see how Fisher’s criterion for w is superior to the ordinary optimization.

# (1) sampling assuming mvnorm with same Sigma
{
rm(list=ls())
mu.1 <- c(2,-1)
mu.2 <- c(2,5)
rho <- 0.8
sigma.1 <- 1
sigma.2 <- 3
Sigma <- matrix(c(sigma.1^2
,rho*sigma.1*sigma.2
,rho*sigma.1*sigma.2,sigma.2^2),byrow=T,nrow=2)
N <- 100

# multivariate normal sampling
X1 <- MASS::mvrnorm(N,mu=mu.1,Sigma=Sigma)
X2 <- MASS::mvrnorm(N,mu=mu.2,Sigma=Sigma)

# make a data frame
X <- data.frame(cbind(rep(c(4,2),each=N),rbind(X1,X2)))
names(X) <- c("group","X1","X2")

means <- matrix(c(tapply(X$X1,X$group,mean),tapply(X$X2,X$group,mean)),2,2)
means <- data.frame(X1=means[,1],X2=means[,2],row.names=c(2,4))

A <- matrix(NA,nrow=nrow(X),ncol=2)
A[,1] <- as.numeric(X$X1) ;A[,2] <- as.numeric(X$X2)}
# (2) plot the sample
{
PLOT <- function(main) {

layout(matrix(c(1,1,2,3),byrow=T,ncol=2),heights=c(0.2,0.8))
par(mar=rep(0,4))
plot.new()
text(0.5,0.5,main,cex=3)
par(mar=c(4,4,0,0))
plot(X2~X1,data=X,pch=21,bg=1
,type="n"
,xlab=expression(x[1]),ylab=expression(x[2]))
axx <- par("xaxp")
axx <- seq(axx[1],axx[2],length.out=axx[3]+1)
axy <- par("yaxp")
axy <- seq(axy[1],axy[2],length.out=axy[3]+1)
abline(h=axy,v=axx,lty=5,col=gray(0.8))
points(X2~X1,pch=21,bg=group,col=group,data=X)}
PLOT(main="Simple Linear Classification")}
# (3) show means and mean line
{
points(means,col=1,pch=3,cex=3,lwd=3)
lines(means,lwd=3,col=1)}
# (4) calculate midpoint and orthogonal line
{
mid.point <- apply(means,2,mean)
points(mid.point[2]~mid.point[1],col=1,pch=4,cex=2,lwd=4)
H <- c(-1,1)
m <- H %*% as.matrix(means)
m <- m[2]/m[1]
inv.m <- solve(m,-1)
arrows(
x0=mid.point[1],x1=mid.point[1]+2
,y0=mid.point[2],y1=mid.point[2]+2*inv.m
,col=1,lwd=3)}   
# (5) Simple Linear Optimization
{
SLO <- function() {
lambda <- sqrt((as.matrix(means[1,]-means[2,])) %*% t(as.matrix(means[1,]-means[2,])))/2
w <- (means[2,]-means[1,])/(2*lambda)

w <- matrix(as.numeric(w))

wX <- A %*% w
wX <- data.frame(group=X$group,wX=wX)

h4 <- hist(wX$wX[wX$group==4],plot=F)
h2 <- hist(wX$wX[wX$group==2],plot=F)

breaks <- sort(union(h4$breaks,h2$breaks))
counts4 <- c(rep(0,length(breaks)-length(h4$counts)-1),h4$counts)
counts2 <- c(h2$counts,rep(0,length(breaks)-length(h2$counts)-1))
mids <- sort(union(h4$mids,h2$mids))

h2$breaks <- h4$breaks <- breaks
h2$mids <- h4$mids <- mids
h4$counts <- counts4

plot(h2,border=0,main=""
,xlab="Projection Values",ylim=c(0,max(counts4,counts2)))
rect( # blue group
xleft=breaks[-length(breaks)]
,ybottom=0
,xright=breaks[-1]
,ytop=counts4
,col=4,density=20,angle=45)
rect( # red group
xleft=breaks[-length(breaks)]
,ybottom=0
,xright=breaks[-1]
,ytop=counts2
,col=2,density=20,angle=-45)
} # close on SLO
SLO()}
# (6) Fisher's Criterion
{
means <- matrix(c(tapply(X$X1,X$group,mean),tapply(X$X2,X$group,mean)),2,2)
b <- as.numeric(solve(cov(A)) %*% (means[1,]-means[2,]))
b0 <- 0.5*as.numeric((t(means[1,]-means[2,]) %*% solve(cov(A))) %*% (means[1,]-means[2,])) # as.numeric() so b0 is a scalar and recycles correctly below
Lx <- (A %*% b)+b0*rep(1,2*N)}
# (7) re-plot the sample
{
PLOT(main="Fisher's Criterion")}
# (8) plot Fisher's Criterion
{
x <- par("xaxp")
x <- seq(100*x[1],100*x[2],length.out=2)
b1 <- b[1]
b2 <- b[2]
y <- (-b0-b1*x)/b2
lines(y~x,lwd=3)}
# (9) plot Projection Histogram
{
FC <- function() {
L4 <- hist(Lx[X$group==4],plot=F)
L2 <- hist(Lx[X$group==2],plot=F)

Lbreaks <- sort(union(L4$breaks,L2$breaks))
Lcounts4 <- c(rep(0,length(Lbreaks)-length(L4$counts)-1),L4$counts)
Lcounts2 <- c(L2$counts,rep(0,length(Lbreaks)-length(L2$counts)-1))
Lmids <- sort(union(L4$mids,L2$mids))

L2$breaks <- L4$breaks <- Lbreaks
L2$mids <- L4$mids <- Lmids
L4$counts <- Lcounts4

plot(L2,border=0,main=""
,xlab="Projection Values",ylim=c(0,max(Lcounts4,Lcounts2)))
rect( # blue group
xleft=Lbreaks[-length(Lbreaks)]
,ybottom=0
,xright=Lbreaks[-1]
,ytop=Lcounts4
,col=4,density=20,angle=45)
rect( # red group
xleft=Lbreaks[-length(Lbreaks)]
,ybottom=0
,xright=Lbreaks[-1]
,ytop=Lcounts2
,col=2,density=20,angle=-45)
} # close on FC
FC()}
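
As a cross-check (not part of the seminar code), MASS::lda() fitted on the same simulated data should give a discriminant direction essentially proportional to the b computed in step (6), up to sign and scale:

library(MASS)
fit <- lda(group ~ X1 + X2, data = X)
fit$scaling        # LDA coefficients
b / sqrt(sum(b^2)) # direction from step (6), normalized for comparison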

Matching family names including typos

In order to build a database of relationships between credit card owners at my workplace, I came up with the following SAS process on a lazy day’s work.

The problem, as I saw it, was that a lot (20%) of bank accounts had the same family name for two different card holders, hence it is very reasonable to assume they are family. But what happens if GrinShtein is written as GreenShtein? An endless variety of typos obscured the true number of family members.

Given the fact that the bank accounts with more than 1 member were probably family related, I had to see which names were similar up to some metric.

I chose the Levenshtein distance as the metric to measure the dissimilarity between two names. Levenshtein’s distance (L) counts the minimum number of edits (insertions, substitutions and deletions) needed to transform String1 to String2.

I’ve graded each pair of names with :

Grade=L/max(c1,c2)

where c1 and c2 are the number of characters in each String.
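
For illustration (the actual implementation was in SAS), the same grade can be computed in R with adist(), which returns the Levenshtein distance; the function name here is my own:

name_grade <- function(s1, s2) {
 L <- adist(s1, s2) # Levenshtein distance
 as.numeric(L) / pmax(nchar(s1), nchar(s2))
}
name_grade("GrinShtein", "GreenShtein") # small grade -> likely the same family name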

Next I had to use a macro to compare the names of 3 or more members of a bank account. The macro’s flow is as follows:

  1. I used the %combo macro to make a list of pair indices to iterate over; these were written into two “+”-separated macro variables, v_1 and v_2.
  2. If the number of distinct ID holders in the bank account is K, then the number of iterations is N=K*(K-1)/2. In each iteration I graded a different pair of names and stored the result of a Boolean rule in an array (the pair array) of size N.
  3. For each bank account I calculated the number of similar names from the sum of the array and K, using min(K,sum(array)). Later, I substituted 2 for 1 because a pair is made of two people, so the similar-names count takes the values S=0,2,…,K.
  4. I calculated the homogeneity: H=S/K.

If H=0 then no names are similar; if H=1 then all names are similar up to the chosen Grade threshold.

I used a threshold on both the Grade and max(c1,c2), since shorter names tend to be more volatile. I guess each language has its own typical typos and spelling errors; Hebrew, in my case, is prone to dropped and added vowels (which can still spell the same name) and to substitutions of consonants that sound alike or sit close on the keyboard.
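
Putting the pieces together, here is a hypothetical R sketch of the per-account homogeneity calculation (the real implementation is the SAS macro described above; the 0.2 threshold and the name_grade() helper from the earlier snippet are my own illustrative choices):

account_homogeneity <- function(names, grade_threshold = 0.2) {
 pairs <- combn(length(names), 2) # all index pairs, like %combo
 similar <- apply(pairs, 2, function(p) name_grade(names[p[1]], names[p[2]]) <= grade_threshold)
 K <- length(names)
 S <- min(K, 2 * sum(similar)) # a similar pair involves two people
 S / K
}
account_homogeneity(c("GrinShtein", "GreenShtein", "Cohen"))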

The results were excellent: thousands of solid households were discovered and added to the database.

The Code for the program will be shown in a few days…

Finding out (fast) the classes of data.frame vectors

Sometimes it’s useful to write down the classes of the vectors inside your data.frame objects, for documentation and for other people to use.

I’ve searched for a quick way to find out all the classes of vectors inside a data.frame.

Since I’ve found no reference for such a function/process I made one up.

I’d like to hear what people have to say about the following use of the “class” function on data.frames.

A simple call:

> library(rpart) # comes with R
> data(kyphosis) # comes with rpart
> class(kyphosis)

"data.frame"

Trying to use the “apply” function to find out the classes of the columns in the data.frame yields the following unwanted result:

> apply(kyphosis,2,class)
Kyphosis         Age      Number       Start 
"character" "character" "character" "character" 

For some reason the apply function returns “character” on all vectors regardless of their true content — the reason, it turns out, is that apply() coerces the data.frame to a matrix first, and since a matrix holds a single type, the factor column drags everything to character.

Anyhow, after some thought I’ve come up with the following function :

> allClass <- function(x) {unlist(lapply(unclass(x),class))}
> allClass(kyphosis)
Kyphosis       Age    Number     Start 
 "factor" "integer" "integer" "integer"

Compact, fast and quite useful. Of course the control flow needs more work to fit other classes and recognize when x is not a data.frame.
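
For what it’s worth, sapply() gives the same result in one call (it simplifies the lapply() output much like the unlist() above), though columns whose class attribute has more than one element (e.g. POSIXct) will keep it from simplifying to a plain character vector:

> sapply(kyphosis, class)
 Kyphosis       Age    Number     Start 
 "factor" "integer" "integer" "integer"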

Comments are welcome.