
Run Berkeley Spark’s PySpark using Docker in a couple minutes

For those of you interested in running the BDAS Spark stack in a virtualized cluster really quickly, the fastest way is to use Linux Containers controlled by Docker. Using Andre Schumacher’s fantastic Berkeley Spark on Docker scripts and tutorial, you can get yourself a virtual cluster of whatever size you’d like in a couple of minutes!

However, the tutorial is Scala-centric, and you will be dropped straight into a Scala shell. I am primarily interested in using Python for analysis and data science tasks, so a couple of extra steps are needed.

Follow Andre’s tutorial, and start up a Spark 0.8.0 cluster on top of Docker as you normally would. Here I am starting up a 6-worker cluster:

user@aliens:~/Documents/docker-scripts⟫ sudo deploy/deploy.sh -i amplab/spark:0.8.0 -w 6 -c
*** Starting Spark 0.8.0 ***
starting nameserver container
started nameserver container: 5093b46c4df527528cae0194a8b2849a258e314dc2e0b847c67950776b5715df
DNS host->IP file mapped: /tmp/dnsdir_10034/0hosts
NAMESERVER_IP: 172.17.0.18
waiting for nameserver to come up
starting master container
started master container: 4d431889af3c7176fa1a9ffee850c6658840a307e86ad6fbf2691e54fe8fb792
MASTER_IP: 172.17.0.19

...lots more output

We are interested in the part that tells us how to SSH into the master node…

***********************************************************************
start shell via:            sudo /home/user/Documents/docker-scripts/deploy/start_shell.sh -i amplab/spark-shell:0.8.0 -n 5093b46c4df527528cae0194a8b2849a258e314dc2e0b847c67950776b5715df

visit Spark WebUI at:       http://172.17.0.19:8080/
visit Hadoop Namenode at:   http://172.17.0.19:50070
ssh into master via:        ssh -i /home/user/Documents/docker-scripts/deploy/../apache-hadoop-hdfs-precise/files/id_rsa -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@172.17.0.19

/data mapped:

kill master via:           sudo docker kill 4d431889af3c7176fa1a9ffee850c6658840a307e86ad6fbf2691e54fe8fb792
***********************************************************************

You can see in the second block of output above that you can SSH into the master using the ‘ssh -i ….’ command. However, the id_rsa key file’s permissions are too open, so SSH will refuse to use it and will not let you into the master node.

Fix this with:

chmod 0600 docker-scripts/apache-hadoop-hdfs-precise/files/id_rsa

Great, now we can enter the master node with this command:

ssh -i docker-scripts/apache-hadoop-hdfs-precise/files/id_rsa -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@172.17.0.19

Of course, use the IP address that Docker generated for you, shown as MASTER_IP in the output above.

From inside the master node we want to use Python 2.7 to do our work, so go find ‘pyspark’ inside /opt/spark-VERSION:

/opt/spark-0.8.0/pyspark

We have a huge problem!

root@master:/opt/spark-0.8.0# ./pyspark
Python 2.7.3 (default, Apr 20 2012, 22:39:59)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Traceback (most recent call last):
  File "/opt/spark-0.8.0/python/pyspark/shell.py", line 25, in 
    import pyspark
  File "/opt/spark-0.8.0/python/pyspark/__init__.py", line 41, in 
    from pyspark.context import SparkContext
  File "/opt/spark-0.8.0/python/pyspark/context.py", line 21, in 
    from threading import Lock
ImportError: No module named threading
>>> sc
Traceback (most recent call last):
  File "", line 1, in 
NameError: name 'sc' is not defined
>>>

Fix number two:

Get out of the Python / PySpark shell (Ctrl-D) and install the full python2.7 package; the Docker image apparently ships only a minimal Python, so standard-library modules like threading are missing:

sudo apt-get install python2.7
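
If you want to double-check the fix before relaunching the shell, here is a quick sanity check (just my own habit, not part of Andre’s tutorial); it should now run without the ImportError:

python2.7 -c "import threading; print 'threading is available'"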

Great! Now run ‘pyspark’ again, and you should see it working perfectly:

root@master:/opt/spark-0.8.0# ./pyspark
Python 2.7.3 (default, Apr 20 2012, 22:39:59)
[GCC 4.6.3] on linux2
...
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 0.8.0
      /_/

Using Python version 2.7.3 (default, Apr 20 2012 22:39:59)
Spark context avaiable as sc.
>>>

>>> sc
<pyspark.context.SparkContext object at 0x...>
>>>
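
To prove the shell is really doing work, here is a tiny smoke test you can type at the prompt (this snippet is my own illustration, not something from Andre’s scripts):

>>> nums = sc.parallelize(range(10000))
>>> nums.filter(lambda x: x % 2 == 0).count()
5000

If that comes back with 5000, PySpark is happily running jobs.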

There you go! Python 2.7 on Spark using Docker, isn’t it lovely?

Aris

Automagically GZip a file remotely before downloading it with SCP!

I don’t know about you, but since I don’t live in South Korea, I do not have near-infinite bandwidth. If you have a file on a Unix server that you want to download with SCP to your local machine, it sure would be awesome if you could automatically compress the file on the remote server before downloading it, saving you space on your local disk and time on the transfer. Rsync won’t help you here either, since you don’t already have an older copy of the file on your local drive to “freshen up”.

But wait, there’s more! What about wrapping it up into a Bash function so you can call it easily?

This is another use of GNU Parallel to do the heavy lifting: it compresses the file remotely, downloads the newly-compressed file (using rsync over passwordless SSH), and then deletes the compressed file on the remote server so no junk is left behind.

function remote_gzip {
    # -S "$1": run the job on the remote host; --return {/}.gz: copy the compressed file back here;
    # --cleanup: delete the remote .gz afterwards; "$2" is the path of the file on the remote host
    parallel -S "$1" --cleanup --return {/}.gz "gzip --best {} -c > {/}.gz" ::: "$2"
}

You can put this Bash function into your .bashrc or something similar, so it will always be with you. Make sure to source the file to get the function into your current shell: “. ~/.bashrc”.

So if you have a server named remote.com and a file at /var/logs/bigfile.log, call it like this:

remote_gzip remote.com /var/logs/bigfile.log

In your current working directory you will have bigfile.log.gz.

Enjoy!

-Aris