Pydoop Submit User Guide

Pydoop applications are run via the pydoop submit command. To start, you will need a working Hadoop cluster. If you don’t have one available, you can bring up a single-node Hadoop cluster on your machine – see the Hadoop web site for instructions. Alternatively, the source directory contains a Dockerfile that can be used to build an image with Hadoop and Pydoop installed and (minimally) configured. Check out .travis.yml for usage hints.
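For example, assuming you build from the repository root, the image can be created with a standard Docker command (the tag name below is arbitrary); how the resulting container should be started depends on the Dockerfile, so treat this as a sketch and check .travis.yml for the options actually used in CI:

docker build -t pydoop-dev .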

If your application is contained in a single (local) file named wc.py, with an entry point called __main__ (see Writing Full-Featured Applications), you can run it as follows:

pydoop submit --upload-file-to-cache wc.py wc input output

where input (file or directory) and output (directory) are HDFS paths. Note that the output directory will not be overwritten: if it already exists when you launch the program, an error will be raised.
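If you are re-running the job, remove the previous output directory first, for instance:

hdfs dfs -rm -r output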

If your entry point has a different name, specify it via --entry-point.
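For reference, here is a minimal sketch of what wc.py could look like, modeled on the word count example from Writing Full-Featured Applications (refer to that section for the authoritative version):

import pydoop.mapreduce.api as api
import pydoop.mapreduce.pipes as pipes


class Mapper(api.Mapper):

    def map(self, context):
        # emit a (word, 1) pair for each word in the input line
        for word in context.value.split():
            context.emit(word, 1)


class Reducer(api.Reducer):

    def reduce(self, context):
        # sum the counts collected for each word
        context.emit(context.key, sum(context.values))


def __main__():
    # default entry point looked up by the pydoop submit launcher
    pipes.run_task(pipes.Factory(Mapper, reducer_class=Reducer))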

pydoop submit accepts the following command line options (an example invocation that combines several of them follows the list):

--num-reducers
    Number of reduce tasks. Set to 0 to run the map phase only.

--no-override-home
    Don’t set the script’s HOME directory to the $HOME in your environment. Hadoop will set it to the value of the ‘mapreduce.admin.user.home.dir’ property.

--no-override-env
    Use the default PATH, LD_LIBRARY_PATH and PYTHONPATH, instead of copying them from the submitting client node.

--no-override-ld-path
    Use the default LD_LIBRARY_PATH instead of copying it from the submitting client node.

--no-override-pypath
    Use the default PYTHONPATH instead of copying it from the submitting client node.

--no-override-path
    Use the default PATH instead of copying it from the submitting client node.

--set-env
    Set environment variables for the tasks. If a variable is set to ‘’, it will not be overridden by Pydoop.

-D, --job-conf
    Set a Hadoop property, e.g., -D mapreduce.job.priority=high.

--python-zip
    Additional Python zip file.

--upload-file-to-cache
    Upload and add this file to the distributed cache.

--upload-archive-to-cache
    Upload and add this archive to the distributed cache.

--log-level
    Logging level.

--job-name
    Name of the job.

--python-program
    Python executable to be used by the wrapper.

--pretend
    Do not actually submit a job; print the generated configuration settings and the command line that would be invoked.

--hadoop-conf
    Hadoop configuration file.

--input-format
    Java class name of the InputFormat.

--disable-property-name-conversion
    Do not adapt property names to the Hadoop version in use.

--do-not-use-java-record-reader
    Disable the Java RecordReader.

--do-not-use-java-record-writer
    Disable the Java RecordWriter.

--output-format
    Java class name of the OutputFormat.

--libjars
    Additional comma-separated list of jar files.

--cache-file
    Add this HDFS file to the distributed cache as a file.

--cache-archive
    Add this HDFS archive to the distributed cache as an archive.

--entry-point
    Explicitly execute MODULE.ENTRY_POINT() in the launcher script.

--avro-input
    Avro input mode (key, value or both).

--avro-output
    Avro output mode (key, value or both).

--pstats-dir
    Profile each task and store stats in this directory.

--pstats-fmt
    pstats file name pattern (expert use only).

--keep-wd
    Don’t remove the work directory.
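For instance, the following invocation (option values are only illustrative) runs the above word count with two reducers, a higher job priority and a specific Python interpreter:

pydoop submit \
    --upload-file-to-cache wc.py \
    --num-reducers 2 \
    -D mapreduce.job.priority=high \
    --python-program python3 \
    wc input output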

Setting the Environment for your Program

When working on a shared cluster where you don’t have root access, you might have a lot of software installed in non-standard locations, such as your home directory. Since non-interactive ssh connections do not usually preserve your environment, you might lose essential settings like LD_LIBRARY_PATH.

For this reason, by default pydoop submit copies the PATH, LD_LIBRARY_PATH and PYTHONPATH environment variables from the submitting node to the driver script that runs each task on Hadoop. If this behavior is not desired, you can disable it via the --no-override-env command line option, or selectively via --no-override-path, --no-override-ld-path and --no-override-pypath.
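For example, assuming --set-env accepts VAR=VALUE assignments as suggested by its description above, you could disable the automatic copying and set only the variables you need (the library path shown is hypothetical):

pydoop submit \
    --no-override-env \
    --set-env LD_LIBRARY_PATH=/opt/mylibs/lib \
    --upload-file-to-cache wc.py \
    wc input output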