Two Common Catches in R Programming

Sometimes the scripts you write give you a big surprise because of a subtle difference in a command. Here are two common, difficult-to-catch traps in R programming.

1. which vs. %in% when subsetting a data frame

which

df <- data.frame(a = runif(5), d = runif(5), animals = c('dog','cat','snake','lion','rat'), z = 1:5)
results1 <- df[, -which(names(df) %in% c("a","d"))]  # works as expected
# how about this one
results2 <- df[, -which(names(df) %in% c("b","c"))]  # surprise! All data are gone
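
The surprise happens because which() returns an empty integer vector when nothing matches, and negating integer(0) still selects zero columns. You can see this by running the pieces separately (continuing with the df above):

which(names(df) %in% c("b","c"))  # integer(0): no column names match
df[, -integer(0)]                 # selects zero columns, so the whole data frame appears empty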

%in%

df <- data.frame(a = runif(5), d = runif(5), animals = c('dog','cat','snake','lion','rat'), z = 1:5)
results1 <- df[, !names(df) %in% c("a","d")]  # works as expected
# how about this one
results2 <- df[, !names(df) %in% c("b","c")]  # returns the un-altered data.frame

Another quick way to drop columns is to assign NULL to them:

dropVec <- c('a','d')
df[dropVec] <- list(NULL)
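
A quick check on the df defined above confirms that only the remaining columns survive:

names(df)
## [1] "animals" "z"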

2. Missing parentheses ()

Look at the following example. You would expect it to print 1 through 9, right? Instead, it prints 0 through 9, because the : operator has higher precedence than -, so 1:n-1 is evaluated as (1:n) - 1.

n <- 10
for (i in 1:n-1) {
  print(i)
}
## [1] 0
## [1] 1
## [1] 2
## [1] 3
## [1] 4
## [1] 5
## [1] 6
## [1] 7
## [1] 8
## [1] 9
n <- 10
for (i in 1:(n-1)){
  print(i)
}
## [1] 1
## [1] 2
## [1] 3
## [1] 4
## [1] 5
## [1] 6
## [1] 7
## [1] 8
## [1] 9
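
To see the precedence at work directly, along with a defensive alternative using seq_len() (which also avoids the classic 1:0 trap when n is 1):

n <- 10
1:n - 1         # `:` binds tighter than `-`, so this is (1:n) - 1, i.e. 0 1 2 ... 9
1:(n - 1)       # the intended sequence 1 2 ... 9
seq_len(n - 1)  # same result here, and returns integer(0) instead of c(1, 0) when n is 1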

Or check out my Rpub: http://rpubs.com/euler-tech/303265

Common challenges while aggregating data with multiple group IDs and functions in R

While analyzing a dataset, one of the most common tasks is looking at the data features in an aggregated way, for example, aggregating the dataset by year, month, day, or IDs. Then you might want to look at the aggregated effects using aggregation functions, not just one but several (say min, max, count, etc.).

There are a couple of ways to do it in R:

  • Aggregate with each function separately and merge the results.

agg.sum <- aggregate(. ~ id1 + id2, data = x, FUN = sum)

agg.min <- aggregate(. ~ id1 + id2, data = x, FUN = min)

merge(agg.sum, agg.min, by = c("id1", "id2"))

  • Aggregate all at once using dplyr

# inclusion

df %>% group_by(id1, id2) %>% summarise_at(.funs = funs(mean, min, n()), .vars = vars(var1, var2))

# exclusion

df %>% group_by(id1, id2) %>% summarise_at(.funs = funs(mean, min, n()), .vars = vars(-var1, -var2))

These are very handy for quick analysis, especially for people who prefer simpler code.
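
For a fully runnable sketch of the dplyr route, here is a toy data frame (the columns id1, id2, var1, and var2 are made up for illustration), aggregated with the newer list() form of .funs, which replaces the older funs() helper:

library(dplyr)

df <- data.frame(
  id1  = rep(c("A", "B"), each = 4),
  id2  = rep(c("x", "y"), times = 4),
  var1 = rnorm(8),
  var2 = runif(8)
)

df %>%
  group_by(id1, id2) %>%
  summarise_at(vars(var1, var2), list(mean = mean, min = min)) %>%
  ungroup()
# one row per (id1, id2) pair, with columns var1_mean, var2_mean, var1_min, var2_min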

Cast multiple value.var columns simultaneously for reshaping data from Long to Wide in R

While working with R, reshaping a data frame from wide format to long format is relatively easier than the opposite, especially when you want to reshape a data frame to wide format with multiple columns for value.var in dcast. Let's look at an example dataset:

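A minimal toy version of such a dataset in long format (the columns time, country, feature1, and feature2 match the dcast call below):

library(data.table)

test <- data.frame(
  time     = rep(1:3, times = 2),
  country  = rep(c("US", "UK"), each = 3),
  feature1 = rnorm(6),
  feature2 = runif(6)
)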

Since v1.9.6 of data.table, we can cast multiple value.var columns with this syntax:

testWide <- dcast(setDT(test), formula = time ~ country, value.var = c("feature1", "feature2"))

All you need to do is wrap the data frame with setDT() and pass a character vector of column names to value.var; the wide result gets one column per combination of value.var variable and country (e.g., feature1_US, feature2_UK).


Adding an existing folder to GitHub on Mac

  1. Create a new repository on GitHub. Here is the important part: don't initialize the new repository with a README, license, or .gitignore file.
  2. Open Terminal
  3. cd [project folder root]
  4. Initialize the local directory as a Git Repository. $ git init
  5. Add the files to your new repository. This stages them. $ git add .
  6. Commit the staged files. $ git commit -m 'message'
  7. Copy the repository URL from the GitHub web page, something like https://github.com/….hello_world.git
  8. Set the copied URL as the remote repository. $ git remote add origin URL_just_copied
  9. Check the remote. $ git remote -v
  10. Push the changes to GitHub. $ git push -u origin master

Now you have successfully pushed an existing project to GitHub. For any new changes after this step, simply do the following to update the repository:

  1. git add .
  2. git commit -m 'new changes'
  3. git push

Some of the most frequently used commands:

git status – check the status of the working tree

git log – show the commit log for the current branch

git pull – pull updates from the remote into the local branch

git config --global credential.helper "cache --timeout=3600" – cache credentials in memory for one hour (3600 seconds)

git config credential.helper store – store credentials permanently in a plain-text file (.git-credentials)

git config --unset credential.helper – unset the credential helper so Git asks for credentials each time

Display all the data columns in Jupyter Notebook

During data exploration, there are often too many columns in the data frame. By default, Jupyter Notebook only displays a handful of them for simplicity.

Here are a couple of ways to display all the columns:

import pandas as pd

from IPython.display import display

data = pd.read_csv('mydave.csv')

# Directly set the option

pd.options.display.max_columns = None

display(data)

Or, use the set_option method from pandas:

pd.set_option('display.max_columns', None)

display(data)

To change the setting locally for a specific cell only, do the following:

with pd.option_context('display.max_columns', None):
    display(data)

You can also do:

from IPython.display import HTML

HTML(data.to_html())

Are we ready for the Aug 21, 2017 solar eclipse?

The 2017 total solar eclipse is fast approaching, and hordes of sky gazers are scrambling to find a spot where they can watch the moon completely obscure the sun for a few moments on Aug. 21. Here is an illustration of the science behind it:

[Figure: stages of a total solar eclipse]

Image Credit: Rick Fienberg, TravelQuest International, and Wilderness Travel

Who can see it?

[Figure: global map of the eclipse path]

Image Credit: NASA’s Scientific Visualization Studio

For those living in the United States, you might want to look at this GIF animation to check when you should look out for this rare event. For people living in the D.C. area, the prime time is 2:40 PM local time.

[Animation: eclipse timing across the United States]

TIME.com has made a very cool web widget to check the prime time for a given zip code. Check here.

Be sure to wear proper eclipse glasses (ordinary sunglasses are not enough) to protect your eyes.

 

How to direct system output to a variable in R

For people familiar with the Linux/Unix/Mac command line, we all know that there are many system commands that can save our day. One of the most common problems is getting the number of lines/words in a large file, and here I'm talking about tens of millions of records and above.

There are many ways to do it: the easiest is to use 'readLines' to read all the lines and count them, but this becomes impossible if your memory won't allow it. On a Linux platform, however, you can easily do it by calling 'wc -l filename.txt'.

In the R environment, you can execute any system command by calling system(). In this example, system("wc -l filename.txt") shows the number of lines. Here is the original question: how do I assign that output to a variable?

It won’t work if you just do:

varName <- system("wc -l filename.txt")

But here is the trick:

varName <- system("wc -l filename.txt", intern = TRUE)

Bingo.
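
With intern = TRUE, system() returns the command's standard output as a character vector, which you can then parse. A small sketch, assuming a Unix-like system and a file named filename.txt:

out <- system("wc -l filename.txt", intern = TRUE)  # e.g. "12345678 filename.txt"
nLines <- as.integer(strsplit(trimws(out), "\\s+")[[1]][1])
nLines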

For more information on the most frequently used Linux command, refer to 50 Most Commonly Used Linux Command with Example.