Display all the data columns in Jupyter Notebook

During data exploration, there are often too many columns in the dataframe to see at once. By default, Jupyter Notebook displays only a handful of them for simplicity.

Here are a couple of ways to display all the columns:

import pandas as pd

from IPython.display import display

data = pd.read_csv('mydave.csv')

# Directly set the option

pd.options.display.max_columns = None


Or, use the set_option method from pandas:

pd.set_option('display.max_columns', None)


To change the setting only for a specific cell, use a context manager; note that the display call must happen inside the with block:

with pd.option_context('display.max_columns', None):
    display(data)


You can also render the full dataframe as HTML:

from IPython.display import HTML

HTML(data.to_html())


Are we ready for the Aug 21, 2017 solar eclipse?

The 2017 total solar eclipse is fast approaching, and hordes of sky gazers are scrambling to find a spot where they can see the shadow of the moon completely obscure the sun for a few moments on Aug. 21. Here is an illustration of the science behind it:


Image Credit: Rick Fienberg, TravelQuest International, and Wilderness Travel

Who can see it?


Image Credit: NASA’s Scientific Visualization Studio

For those living in the United States, you might want to look at this GIF animation to check when you should look up for this rare event. For people living in the D.C. area, the prime time is 2:40 PM local time.


TIME.com has made a very cool web widget that looks up the prime time for a given zip code. Check here.

Be sure to wear proper eclipse glasses, not ordinary sunglasses, to protect your eyes.


How to direct system output to a variable in R

For people familiar with the Linux/Unix/Mac command line, we all know there are many system commands that can save our day. One of the most frequently encountered problems is getting the number of lines or words in a large file; here I'm talking about tens of millions of records and above. There are many ways to do it: the easiest is to use 'readLines' to read all the lines and count them, but that becomes impossible if your memory won't allow it. On a Linux platform, however, you can easily do it by calling 'wc -l filename.txt'.

In the R environment, you can execute any system command by calling 'system()'. In this example, system("wc -l filename.txt") shows the number of lines. Here is the original question: how do I assign the output to a variable?

It won’t work if you just do:

varName <- system("wc -l filename.txt")

But here is the trick:

varName <- system("wc -l filename.txt", intern = TRUE)
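For comparison, the same capture-instead-of-print pattern in Python uses subprocess.check_output. This is a minimal sketch assuming a Unix-like system where wc is available; the three-line temp file just stands in for a large data file:

```python
import os
import subprocess
import tempfile

# Write a small three-line file to stand in for a large data file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("a\nb\nc\n")
    path = f.name

# Like R's system(..., intern = TRUE), check_output returns the
# command's stdout as a string instead of echoing it to the console
out = subprocess.check_output(["wc", "-l", path], text=True)
n_lines = int(out.split()[0])
print(n_lines)  # 3

os.remove(path)
```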


For more information on the most frequently used Linux command, refer to 50 Most Commonly Used Linux Command with Example.


Using R in Jupyter Notebook

R has started to gain momentum in data science thanks to its ease of use and wealth of statistical packages. As a longtime Python user, I want to run some R commands within Jupyter for practical reasons: some collaborators use R for certain tasks, or it is simply more convenient. This article will show you how to do it.

  • Setup environment

Install R essentials in your current environment:

conda install -c r r-essentials

These 'essentials' include the packages dplyr, shiny, ggplot2, tidyr, caret, and nnet.

You can also create a new environment just for the R essentials:

conda create -n my-r-env -c r r-essentials

Now you’re all set to work with R in Jupyter.

How about installing new packages in R for use in Jupyter?

There are two ways of doing it. First, build a conda R package by running:

conda skeleton cran xxx
conda build r-xxx/

Or you can install the package from inside R via install.packages() or devtools::install_github(), with one change: point the installation destination to the conda R library.


  • Into good hands

The interactivity comes mainly from the so-called "magic commands", which allow you to switch from Python to command-line instructions (like ls, cat, etc.) or to write code in other languages such as R, Scala, or Julia.

After opening a Jupyter notebook, you should be able to see R in the console:


To switch from Python to R, first load the rpy2 extension (the rpy2 package must be installed):

%load_ext rpy2.ipython

After that, you can use R through the %R magic command.

# Hide warnings if there are any
import warnings
warnings.filterwarnings('ignore')
# Load in the R magic
%load_ext rpy2.ipython
# We need ggplot2
%R require(ggplot2)
# Load in the pandas library
import pandas as pd
# Make a pandas DataFrame
df = pd.DataFrame({'Alphabet': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i'],
                   'A': [4, 3, 5, 2, 1, 7, 7, 5, 9],
                   'B': [0, 4, 3, 6, 7, 10, 11, 9, 13],
                   'C': [1, 2, 3, 1, 2, 3, 1, 2, 3]})

Then, in a new cell (a cell magic like %%R must be the first line of its cell), take the Python variable df, assign it to an R variable of the same name, and plot it:

%%R -i df
ggplot(data=df) + geom_point(aes(x=A, y=B, color=C))

Automate tabular financial data tables into vectorized sequential data

A lot of times, we receive time-related data in a table format and want to convert it into a simple format with one column of datetimes and another of values. See this sample table:

Now we want to convert this dataset into another format that is easier to visualize and to convert to other data structures like xts or timeSeries objects. The converted data will look like:


Let's look at a sample unemployment-rate table from the Department of Labor.

sampleData <- read.csv('table_date.csv')


Method in R: there are two common ways to do it. The first:

tableDataFlat <- as.vector(t(sampleData[1:nrow(sampleData), 2:ncol(sampleData)]))
dates <- seq.Date(as.Date('2005-01-01'), as.Date('2017-12-01'), 'month')
newTS <- data.frame(dates = dates, value = tableDataFlat)


The second way in R:

tableDataFlat <- c(t(as.matrix(sampleData[1:nrow(sampleData), 2:ncol(sampleData)])))
newTS <- data.frame(dates = dates, value = tableDataFlat)

Now we can do visualization and analysis more conveniently.



Method in Python:

In Python, it is even simpler: flatten the data matrix using:

import numpy as np
import pandas as pd

df = pd.read_csv('table_date.csv')
# Drop the label column and flatten the values row by row
data = df.iloc[:, 1:].values
data_flat = data.flatten()
# 'MS' generates month-start dates, one per value
dates = pd.date_range(start='2005-01-01', end='2017-12-01', freq='MS')
new_df = pd.DataFrame({'date': dates, 'value': data_flat})
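To see why a plain row-major flatten lines up with chronological order, here is a minimal sketch with synthetic numbers; the 2 x 12 matrix stands in for the years-by-months table above:

```python
import numpy as np
import pandas as pd

# Synthetic table: 2 years (rows) x 12 months (columns)
table = np.arange(24).reshape(2, 12)  # row 0 = 2005, row 1 = 2006

# Row-major flatten emits Jan..Dec 2005 before Jan..Dec 2006,
# which is exactly chronological order
flat = table.flatten()
dates = pd.date_range(start='2005-01-01', periods=len(flat), freq='MS')
long_df = pd.DataFrame({'date': dates, 'value': flat})
print(long_df.iloc[12])  # value 12 lands on 2006-01-01
```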

Python traps you should know

Like every language, Python has some easy-to-overlook traps. Some of them are hidden and can cause big problems or errors in your program. Here are some of the most common traps a good Python programmer should be aware of:

    •  1. A mutable object used as a default parameter

Like many other languages, Python provides default parameters for functions, which are great for making things easier. However, things can become unpleasant if you put a mutable object in the function signature as the default value for a parameter. Let's look at an example:
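A minimal sketch of the trap (the function name append_item is made up for illustration):

```python
def append_item(item, bucket=[]):
    # the default [] is created ONCE, when the def statement runs
    bucket.append(item)
    return bucket

first = append_item(1)
second = append_item(2)
print(first, second)  # [1, 2] [1, 2] -- both calls share the same list
```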


A surprise?! The root cause is that everything is an object in Python, even a function, and default parameter values are just attributes of the function. Default parameter values are evaluated once, when the function definition is executed.

Another more obvious example:
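A sketch of the same definition-time evaluation with a timestamp (names are made up):

```python
import time

def stamp(t=time.time()):
    # time.time() ran once, when the function was defined
    return t

a = stamp()
time.sleep(0.01)
b = stamp()
print(a == b)  # True -- the default timestamp never changes
```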


How to fix it?

According to the Python documentation: a way around this is to use None as the default, and to explicitly test for it in the body of the function.
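Following that advice, a fixed version might look like this (same made-up function as above):

```python
def append_item(item, bucket=None):
    # create a fresh list on every call instead of sharing one default
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [2] -- no leftover state from the first call
```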


  • 2. x += y vs x = x + y

Generally speaking, the two are equivalent. Let's look at an example:


As we can see, += keeps the same id: for a mutable object like a list, x += y modifies the value at the current location, while x = x + y makes x point to a new object.
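A minimal demonstration with a list, where the two forms actually differ:

```python
x = [1, 2]
before = id(x)
x += [3]                 # list.__iadd__ extends the existing list
inplace_same = id(x) == before

y = [1, 2]
before = id(y)
y = y + [3]              # builds a brand-new list, then rebinds the name
rebind_same = id(y) == before

print(inplace_same, rebind_same)  # True False
```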

  • 3. Magic parentheses ()

In Python, () can represent a tuple, a data structure which is immutable.
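A quick sketch:

```python
t = (1, 2, 3)
print(type(t))  # <class 'tuple'>
# t[0] = 9 would raise TypeError: tuples are immutable
```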


What if you have only one element in the tuple?


Magic: it becomes an integer instead of a tuple. The right thing to do is this:
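A minimal sketch of both the trap and the fix:

```python
n = (1)          # the parentheses are just grouping here
print(type(n))   # <class 'int'>

t1 = (1,)        # the trailing comma is what makes a one-element tuple
print(type(t1))  # <class 'tuple'>
```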


  • 4. A generated list of lists

This is like a 2-D array, or a list with mutable elements in it. Sounds very easy:
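A small reconstruction of the trap:

```python
grid = [[]] * 3        # three references to the SAME inner list
grid[0].append(10)
print(grid)  # [[10], [10], [10]] -- every "row" changed
```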


By adding the value 10 to the first element of the list, we populated all the elements with the same value. Interesting, hmmm? That's not what I want!

The reason is still the same: the mutable elements of the list all point to the same object. The right syntax is:
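A list comprehension, which creates a fresh inner list on each iteration:

```python
grid = [[] for _ in range(3)]   # a new inner list per iteration
grid[0].append(10)
print(grid)  # [[10], [], []]
```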

As seen above, there are many traps when using Python, and you should definitely be aware of them.