Deep Learning with GPUs: How do we start? — A quick setup guide on Amazon EC2

Deep learning is one of the hottest buzzwords in tech and is impacting everything from health care to transportation to manufacturing, and more. Companies are turning to deep learning to solve hard problems, like speech recognition, object recognition, and machine translation.

Every new breakthrough comes with challenges. The biggest challenge for deep learning is that training a model requires massive amounts of matrix multiplications and other operations. A single CPU usually has no more than 12 cores, which becomes a bottleneck for deep learning development. The good thing is that all the matrix computation can be parallelized, and that’s where the GPU comes to the rescue. A single GPU might have thousands of cores, making it a perfect fit for deep learning’s massive matrix operations. GPUs are much faster than CPUs for deep learning because they have orders of magnitude more resources dedicated to floating point operations, running specialized algorithms that ensure their deep pipelines are always filled.


Now we know why a GPU is necessary for deep learning. You’re probably interested in deep learning and can’t wait to try it, but you don’t have a big GPU in your computer. The good news is that there are public GPU servers for you to start with. Google, Amazon, and OVH all rent out GPU servers, and the cost is very reasonable.

In this article, I’ll show you how to set up a deep learning server on Amazon EC2, a p2.xlarge GPU instance in this case. To set up the Amazon instance, here is the prerequisite software you’ll need:

  1. Python 2.7 (Anaconda recommended)
  2. Cygwin with wget and vim (if on Windows)
  3. The Amazon AWS Command Line Interface (AWS CLI), if on Mac

Here is the fun part:

  1. Register an Amazon EC2 account.
  2. Go to Support –> Support Center –> Create case (only for new EC2 users). Type the information into the form and ‘submit’ at the end. Wait up to 24-48 hours for the account to be activated. If you are already an EC2 user, you can skip this step.
  3. Create new user group. From console, Services –> Security, Identity & Compliance –> IAM –> Users –> Add user
  4. After creating the new user, add permissions to the user by clicking the user just created.
  5. Obtain access keys: Users –> Access Keys –> Create access key. Save the information.
  6. Now we’re done with the Amazon EC2 account; go to the Mac Terminal or Cygwin on Windows.
  7. Download the two setup scripts and change the extension to .sh, since WordPress doesn’t support bash file uploads.
  8. Save the two shell scripts to your current working directory.
  9. In the terminal, type: aws configure. Type in the access key ID and secret access key saved in step 5.
  10. Run the downloaded setup script with bash.
  11. Save the generated text (on terminal) for connecting to the server
  12. Connect to your instance: ssh -i /Users/lxxxx/.ssh/aws-key-fast-ai.pem
  13. Check your instance by typing: nvidia-smi
  14. Open Chrome Browser with URL: dl_course
  15. Now you can start to write your deep learning code in the Python Notebook.
  16. Shut down your instance in the console when you’re done, or you’ll keep paying for it.

For a complete tutorial video, please check Jeremy Howard’s video here.


The settings and passwords are all saved under ~/.aws and ~/.ipython.





The convenience of subplots = True in dataframe.plot

When it comes to data analysis, there is a saying: “a picture is worth a thousand words.” Visualization is an essential and effective way of exploring data, and usually our first step in understanding the raw data. In Python, there are a lot of visualization libraries. A pandas DataFrame has plenty of built-in plotting methods: line, bar, barh, hist, box, kde, density, area, pie, scatter, and hexbin.

The quickest way to visualize all the columns of a dataframe is to simply call df.plot(). For example:

import pandas as pd
import numpy as np

df = pd.DataFrame({'A': np.arange(1, 10), 'B': 2 * np.arange(1, 10)})
df.plot(title='plot all columns in one chart')


But a lot of times we want each feature plotted on a separate chart because of the complexity of the data. It helps us disentangle the dataset.

It turns out that there is a simple trick to play with in df.plot: pass ‘subplots = True’.

df.plot(figsize=(8, 4), subplots=True, layout=(2, 1), title='plot all columns in separate charts');


That’s it. Simple but effective. You can change the arrangement by playing with the layout tuple.
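Putting the pieces together, here is a runnable sketch (assuming pandas, numpy, and matplotlib are installed; the Agg backend is used so no window opens):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: render without opening a window
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": np.arange(1, 10), "B": 2 * np.arange(1, 10)})

# subplots=True draws one chart per column, arranged by the `layout` tuple
axes = df.plot(figsize=(8, 4), subplots=True, layout=(2, 1),
               title="plot all columns in separate charts")
print(axes.shape)  # → (2, 1)
```

Note that with subplots=True, df.plot returns an array of axes matching the layout, so you can keep customizing each panel individually afterwards.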

Hope you find it helpful too.

All about *apply family in R

R has many *apply functions which are very helpful for simplifying our code. Much of what the *apply functions do is also covered by the dplyr package, but it is still good to know the differences and how to use them. They are just too convenient to ignore.

First, the following mnemonics give you an overview of what each *apply function does in general.


  • lapply is a list apply which acts on a list or vector and returns a list.
  • sapply is a simple lapply (function defaults to returning a vector or matrix when possible)
  • vapply is a verified apply (allows the return object type to be prespecified)
  • rapply is a recursive apply for nested lists, i.e. lists within lists
  • tapply is a tagged apply where the tags identify the subsets
  • apply is generic: applies a function to a matrix’s rows or columns (or, more generally, to dimensions of an array)


For the sum/mean of each row/column, there are more optimized functions: colMeans, rowMeans, colSums, rowSums. When applied to a dataframe, apply will automatically coerce it to a matrix.

# Two-dimensional matrix
myMetric <- matrix(floor(runif(15, 0, 100)), 5, 3)
myMetric
# apply min to rows
apply(myMetric, 1, min)
# apply min to columns
apply(myMetric, 2, min)

     [,1] [,2] [,3]
[1,]   28   22    6
[2,]   31   75   80
[3,]    7   88   96
[4,]   15   70   27
[5,]   74   84   12
[1]  6 31  7 15 12
[1]  7 22  6

For a list or vector, lapply applies the function to each element and returns a list. lapply is the workhorse under all *apply functions, the most fundamental one.

x <- list(a = runif(5,0,1), b = seq(1:10), c = seq(10:100))
lapply(x, FUN = mean)

# Result

$a
[1] 0.4850281

$b
[1] 5.5

$c
[1] 46

sapply does the same as lapply; only the output differs: it simplifies the output to a vector rather than a list.

x <- list(a = runif(5,0,1), b = seq(1:10), c = seq(10:100))
sapply(x, FUN = mean)

        a         b          c
0.2520706 5.5000000 46.0000000

vapply is similar to sapply, but faster and safer: you prespecify the return type, and R checks it.

rapply is a recursive apply, especially useful for nested list structures. For example:

# Append "!" to strings, otherwise increment
myFun <- function(x){
  if (is.character(x)){
    return(paste(x, "!", sep = ""))
  } else {
    return(x + 1)
  }
}

# A nested list structure
l <- list(a = list(a1 = "Boo", b1 = 2, c1 = "Eeek"),
          b = 3, c = "Yikes",
          d = list(a2 = 1, b2 = list(a3 = "Hey", b3 = 5)))

# Result is a named vector, coerced to character
rapply(l, myFun, how = "unlist")

# Result is a nested list like l, with values altered
rapply(l, myFun, how = "replace")

   a.a1    a.b1    a.c1       b       c    d.a2 d.b2.a3 d.b2.b3
 "Boo!"     "3" "Eeek!"     "4" "Yikes!"    "2"  "Hey!"     "6"

$a
$a$a1
[1] "Boo!"

$a$b1
[1] 3

$a$c1
[1] "Eeek!"

$b
[1] 4

$c
[1] "Yikes!"

$d
$d$a2
[1] 2

$d$b2
$d$b2$a3
[1] "Hey!"

$d$b2$b3
[1] 6

tapply is for when you want to apply a function to subsets of a vector, where the subsets are defined by some other vector, usually a factor.

tapply is similar in spirit to the split-apply-combine functions that are common in R (aggregate, by, ave, ddply, etc.).

x <- 1:20
y <- factor(rep(letters[1:5], each = 4))
tapply(x, y, sum)

 a  b  c  d  e
10 26 42 58 74

mapply and Map
mapply is for when you have several data structures (e.g. vectors, lists) and you want to apply a function to the 1st elements of each, then the 2nd elements of each, etc., coercing the result to a vector/array as in sapply.

Map is a wrapper around mapply with SIMPLIFY = FALSE, so it is guaranteed to return a list.

mapply(sum, 1:5, 1:10, 1:20)
mapply(rep, 1:4, 4:1)

 [1]  3  6  9 12 15 13 16 19 22 25 13 16 19 22 25 23 26 29 32 35

[[1]]
[1] 1 1 1 1

[[2]]
[1] 2 2 2

[[3]]
[1] 3 3

[[4]]
[1] 4


This post is compiled from Stack Overflow’s top answers.

For a better view of this, look at the R Notebook I’ve created.

Access Amazon Redshift Database from Python

Amazon has definitely made significant gains from the cloud movement in the past decade, as more and more companies ditch their own data servers in favor of Amazon’s. There is a very good reason to do that: it is cheaper, faster, and easy to access from anywhere.

Now, how do we retrieve data from Redshift and do data analysis in Python? It is very simple. The information you’ll need ahead of time is: username, password, the Redshift URL, and the port number (default is 5439).

I’ll show you how to connect to Amazon Redshift using the psycopg2 library. First install psycopg2 with: pip install psycopg2.

Then use the following Python code to define your connections.

from getpass import getpass
import psycopg2

def create_conn(*args, **kwargs):
    config = kwargs['config']
    try:
        con = psycopg2.connect(dbname=config['dbname'], host=config['host'],
                               port=config['port'], user=config['user'],
                               password=config['password'])
        return con
    except Exception as err:
        print(err)

keyword = getpass('password')   # type in the password, or read it from a saved json file

config = {'dbname': 'lake_one',
          'host': '[your host url]',
          'port': 5439,
          'user': '[your username]',
          'password': keyword}

How to use this and do fun stuff:

import pandas as pd

con = create_conn(config=config)
data = pd.read_sql("select * from mydatabase.tablename;", con, index_col='date')
data.plot(title='My data', grid=True, figsize=(15, 5))

con.close()  # close the connection

Simple as that. Now it’s your turn to create fantastic analysis.
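The same pd.read_sql pattern works with any DB-API connection, so you can prototype the analysis without a Redshift cluster. Here is a self-contained sketch using an in-memory SQLite database as a stand-in (the table name and columns are made up for illustration):

```python
import sqlite3
import pandas as pd

# Stand-in for the Redshift connection: an in-memory SQLite database
con = sqlite3.connect(':memory:')
con.execute("create table tablename (date text, value real)")
con.executemany("insert into tablename values (?, ?)",
                [('2017-01-01', 1.0), ('2017-01-02', 2.5)])

# Same call you would make against the Redshift connection
data = pd.read_sql("select * from tablename;", con, index_col='date')
print(data.shape)  # → (2, 1)
con.close()
```

Once you point the connection at Redshift instead, the read_sql and plotting code stays exactly the same.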

How to use customized function for any Pipe operator %>% in R

For an advanced R programmer or a Python (Spark) machine learning engineer, you have probably heard of or used a pipeline at least once in your data or model workflow. The concept of the pipeline is derived from the Unix/Linux shell. A pipeline is a sequence of processes chained together by their standard streams, so that the output of each process (stdout) feeds directly as input (stdin) to the next one, for example: ls -l | grep key | less. Since the debut of one of the greatest R packages, ‘magrittr’, the pipeline has been one of my favorite things in data engineering.

As we know, a pipeline requires you to pass the whole output of the previous command to the next one. A problem arises when you want to use some basic R function on just one column of the data object. For example, say I have the dataset ‘babynames’ and I want to round the ‘prop’ column to 3 digits. What will happen if I do this:


babynames %>%
  round('prop') %>%
  head

It gives me an error:

> babynames %>%
+   round('prop') %>%
+   head
Error in Math.data.frame(list(year = c(1880, 1880, 1880, 1880, 1880, 1880,  :
  non-numeric variable in data frame: sexname

How am I going to fix it? The solution is simple, write a customized wrapper function to let it go with the flow. Here is the solution:


myRound <- function(df, colname){
  df[[colname]] <- round(df[[colname]], 3)
  df
}

babynames %>%
  myRound('prop') %>%
  head

Now it works. Hooray!

   year sex   name          n  prop
  <dbl> <chr> <chr>     <int> <dbl>
1  1880 F     Mary       7065 0.072
2  1880 F     Anna       2604 0.027
3  1880 F     Emma       2003 0.021
4  1880 F     Elizabeth  1939 0.020
5  1880 F     Minnie     1746 0.018
6  1880 F     Margaret   1578 0.016

Why does it work?

A pipeline works like a signal passing through a multi-stage filter: each stage can only take the whole object as input, not part of it. The wrapper function therefore acts as a buffer function within the pipeline.


Can’t start Jupyter Notebook in macOS Sierra 10.12.5

Many people have experienced an annoying issue when trying to fire up a Jupyter notebook after updating to macOS Sierra 10.12.5. There are two sequential fixes, depending on your Mac environment.

The first easy fix is to copy and paste http://localhost:8888/tree directly into your browser. If it doesn’t work in Chrome, use Safari.

The more annoying case is when the first fix simply won’t do it, and the browser prompts you for a password after you paste in the link. This can be fixed by changing the Jupyter notebook server settings, following these steps:

  1. Open Mac Terminal
  2. jupyter notebook --generate-config

    This command generates a config file under ~/.jupyter/


  3. jupyter notebook password

Type in your password twice and it will save the hashed password into ~/.jupyter/jupyter_notebook_config.json

After this setup, fire up your notebook, type in your password (not the hashed value) when prompted, and save it. It will not ask for the password anymore.

Some helpful commands to use:

jupyter --version

jupyter notebook list  : list all running notebook sessions

If you still have problems, combine the two steps and it should work.



Converting week numbers to dates


While working with a time series dataset, sometimes you’ll only get the date as a week number of its year. This article presents an easy way to convert it to a time tuple and a datetime object in Python.

An easy way to do it is to use strptime from the datetime module. Example:

import datetime

week = 12
year = 2017

atime = datetime.datetime.strptime('{} {} 1'.format(year, week), '%Y %W %w').timetuple()

This will return ‘time.struct_time(tm_year=2017, tm_mon=3, tm_mday=20, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=0, tm_yday=79, tm_isdst=-1)’

In this command, the symbols used to parse the date string are %Y, %W, and %w. The whole symbol table can be found in the strftime symbol table.
%Y: the four-digit year
%W: the week number of the year (Monday as the first day of the week) as a decimal number. All days in a new year preceding the first Monday are considered to be in week 0.
%w: the weekday as a decimal number, where 0 is Sunday and 6 is Saturday.

To convert this to a datetime object:
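As a minimal sketch, the time tuple built above can be turned into a datetime object by unpacking its first six fields:

```python
import datetime

week = 12
year = 2017
atime = datetime.datetime.strptime('{} {} 1'.format(year, week),
                                   '%Y %W %w').timetuple()

# The first six fields of a time tuple are year, month, day, hour, minute,
# second — exactly the positional arguments the datetime constructor expects
adate = datetime.datetime(*atime[:6])
print(adate)  # → 2017-03-20 00:00:00
```

From there you can use all the usual datetime arithmetic, e.g. adate + datetime.timedelta(days=7) for the next week.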