R for Ecologists

R is exceptional statistical software for ecological analysis, as it includes a broad range of methods employed in ecology, as well as numerous routines for exploratory data analysis (EDA). Technically, the language is called S; R is the open-source implementation, available for many systems for free. In these pages, however, I will refer to the language as R to simplify the text. Previous versions of this web page attempted to cover S-Plus as well as R. I no longer have S-Plus available to me and work exclusively in R; this is by choice and, in my view, an advantage. The older version of this page, which covered both, is available here: R_S_ecology.html, but will no longer be maintained.

Variables and Types

Like most programming languages, R allows users to create variables, which are essentially named computer memory. For example, you may store the number of species in a sample in a variable. Variables are identified by a name assigned when they are created. Names should be unique, and long enough to clearly identify the contents of the variable. You may work with the same data weeks or months later, and variable names like x or data are not very helpful. Names can consist of letters, numbers, and the characters "." or "_". They may not start with a number, or include the character "$" or any arithmetic symbols, as these have special meaning in R. In very old versions of R the underscore had a different meaning and was not allowed in variable names, but that time is long past, and the use of underscores to separate words in variable names is now common.

Variables are assigned a value in an assignment statement, which in R has the variable name to the left of a left-pointing arrow (typed with a "less than" followed by a "dash") and the value to the right of the arrow. For example,

number.species <- 137

Recent versions of R allow you to use the = sign for an assignment, i.e.

number.species = 137

but I will stick with the older, more elegant arrow.

R allows the creation of variables which contain numeric values (both integers and floating point or real numbers), characters, or special characters interpreted as "logical" values. For example

x <- 1.2345
small.value <- 1.0e-10
species.name <- 'Pinus contorta'
conifer <- TRUE

Notice that real or floating point numbers can be entered with just a decimal point, or in exponential notation, where 1.0e-10 means 1.0 × 10^-10. Notice also that character values, called "strings", should be entered in quotes (single or double, it doesn't matter as long as they match). Finally, note that the word TRUE is NOT surrounded by quotes. This is not the WORD TRUE, but rather the VALUE TRUE. Logical variables can only take the values TRUE or FALSE.

Unlike many programming languages (e.g. FORTRAN or C) you do not have to tell R what kind of value (integer, real, or character) a variable will contain; it can tell when the variable is assigned. R will only allow the appropriate operations to be performed on a variable. For example

species.name + 37
Error in species.name + 37 : non-numeric argument to binary operator
R did not allow me to add 37 to species.name because species.name was a character variable.
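To see what type R inferred for a variable, the class() function reports it. A minimal sketch, reusing the variables defined above:

```r
x <- 1.2345
species.name <- 'Pinus contorta'
conifer <- TRUE

# class() reports the type R inferred when the variable was assigned
class(x)             # "numeric"
class(species.name)  # "character"
class(conifer)       # "logical"
```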

Data Structures

R is a 4th generation language, meaning that it includes high-level routines for working with data structures, rather than requiring extensive programming by the analyst. In R there are 4 primary data structures we will use repeatedly.
  1. vectors --- vectors are one-dimensional ordered sets composed of a single data type. Data types include integers, real numbers, and strings (character variables)
  2. matrices --- matrices are two dimensional ordered sets composed of a single data type, equivalent to the concept of matrix in linear algebra.
  3. data frames --- data frames are one to multi-dimensional sets, and can be composed of different data types (although all data in a single column must be of the same type). In addition, each column and row in a data frame may be given a label or name to identify it. Data frames are equivalent to a flat file database, and similar to spreadsheets. Accordingly, we often refer to specific columns in a data frame as "fields."
  4. lists --- lists are compound objects of associated data. Like data frames, they need not contain only a single data type, but can include strings (character variables), numeric variables, and even such things as matrices and data frames. In contrast to data frames, list items do not have a row-column structure, and items need not be the same length; some can be single values, and others a matrix. You can think of a list as a named box to put related objects into. It's a little hard to imagine how lists operate in the abstract, but you will see that many of the results of analyses in R are returned as lists, so we'll introduce them as necessary that way.
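As a quick sketch (with made-up values), one of each structure can be created directly:

```r
v <- c(3, 1, 4)                              # vector: one-dimensional, one type
m <- matrix(1:6, nrow = 2, ncol = 3)         # matrix: two-dimensional, one type
d <- data.frame(plot = 1:2,                  # data frame: columns may differ in
                soil = c('deep', 'shallow')) #   type, and carry names
l <- list(counts = v, sites = d)             # list: mixed components of
                                             #   different lengths
```

Here length(v) is 3, dim(m) is 2 3, and names(d) is "plot" "soil"; the list simply bundles the vector and the data frame under the names "counts" and "sites".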

Vectors and Matrices

Vectors, matrices, data frames and lists are identified by a name given the data structure at the time it is created. Names should be unique, and long enough to clearly identify the contents of the structure. Names can consist of letters, numbers, and the characters "." or "_". They may not start with a number, or include the character "$" or any arithmetic symbols, as these have special meaning in R.

Vectors are often read in as data or produced as the result of analysis, but you can produce one simply using the c() function, which stands for "combine." For example

demo.vector <- c(1,4,2,6,12)

produces a vector of length 5 with the values 1, 4, 2, 6, 12.

Individual items within a vector or matrix can be identified by subscript (numbered 1 - n), which is indicated by a number (or numeric variable) within square brackets. For example, if the number of species per plot is stored in a vector spc.plt, then

spc.plt[37]

equals the number of species in plot 37.

Matrices are specified in the order "row, column", so that

veg[23,48]

equals row 23, column 48 in matrix veg.

Individual rows or columns within a matrix can be referred to by implied subscript, where the value of the desired row or column is specified and the other is omitted. For example,

veg[,3]

represents the third column of matrix veg, as the row number before the comma was omitted. Similarly,

veg[5,]

represents row 5, as the column number after the comma was omitted. In addition, a number of specialized subscripts can be used.

veg[]          # = all rows and columns of matrix veg
spcplt[a:b]    # = spcplt[a] through spcplt[b]
spcplt[-a]     # = all of vector spcplt except spcplt[a]
veg[a:b,c:d]   # = a submatrix of veg from rows a to b and columns c to d

It's even possible to specify specific subsets of rows and columns that are not adjacent.

spcplt[c(1,7,10),c(3,6,12)]   # = a submatrix consisting of rows 1, 7, and 10,
                              #   and columns 3, 6, and 12 from matrix spcplt
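Using the demo.vector defined earlier, the subscript forms above behave like this (a quick sketch you can paste into R):

```r
demo.vector <- c(1, 4, 2, 6, 12)

demo.vector[3]           # 2         -- a single element
demo.vector[2:4]         # 4 2 6     -- a range of elements
demo.vector[-1]          # 4 2 6 12  -- everything except the first
demo.vector[c(1, 3, 5)]  # 1 2 12    -- non-adjacent elements
```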

Data Frames

Data frames can be accessed exactly as can matrices, but can also be accessed by data frame and column or field name, without knowing the column number for a specific data item. For example, in the Bryce dataset, there is a column labeled "elev" that holds the elevation of each sample plot. This column can be accessed as bryce$elev, where "bryce" is the name of the data frame, "elev" is the name of the field or column of interest, and the "$" is a separator to distinguish data frame from field.
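Since the Bryce dataset isn't loaded here, a tiny stand-in data frame (with hypothetical values) shows the equivalent access forms:

```r
# a tiny stand-in for a data frame like bryce (values are hypothetical)
demo.df <- data.frame(elev = c(2478, 2451, 2439),
                      slope = c(10, 15, 6),
                      row.names = c('50001', '50002', '50003'))

demo.df$elev        # by field name: 2478 2451 2439
demo.df[, 1]        # by column number: the same values
demo.df[['elev']]   # list-style access: the same values again
```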

If you are routinely working with one or a few data frames, R can be told the name(s) of the data frames in an "attach" statement, and the data frame name and separator can be omitted. For example, if we give the command

attach(bryce)

we can specify the field "elev" simply as "elev" rather than "bryce$elev". This is more concise notation, but has serious drawbacks: objects with the same name can mask one another, and later changes to the data frame are not reflected in the attached copy.
Data frames are extraordinarily useful in R. You can drill down and find out a little more here.


Lists

As noted above, a list is a compound object composed of associated data. Items within a list are generally referred to as components. Similar to data frames, components in a list can be given a name, and the component can be specified by name at any time. In addition, components can be specified by their position in the list, similar to a subscript in a vector. However, in contrast to a vector, list components are specified with double [[ ]] delimiters. We will ultimately find it quite handy to create our own lists, but for the first few labs we will just see them as results from analyses, so we'll take them as they come and demonstrate their properties by example.

For the time being, I'll give a very simple example. Using the spc.plt vector above, and the names of the veg data frame.

list.demo <- list(species_per_plot=spc.plt,species_names=names(veg))
list.demo

50001 50002 50003 50004 50005 50006 50007 50008 50009 50010 50011 50012 
    8    14    12     8    16    11    12     8     8    16    19    18 
50013 50014 50015 50016 50017 50018 50019 50020 50021 50022 50023 50024 
    9    14    19     8    10    12    13     9    15     6    10    18 
50025 50026 50027 50028 50029 50030 50031 50032 50033 50034 50035 50036 
   16    12    19    13     6    13    19    10    15    16    13    16 
    .     .     .     .     .     .     .     .     .     .     .     .
    .     .     .     .     .     .     .     .     .     .     .     .
    .     .     .     .     .     .     .     .     .     .     .     .
50157 50158 50159 50160 50161 50162 50163 50164 50165 50166 50167 50168 
    5     5     6     7     4    10    12     3    12     4     5    15 
50169 50170 50171 50172 
    8     8     7    11 

  [1] "junost"   "ameuta"   "arcpat"   "arttri"   "atrcan"   "berfre"  
  [7] "ceamar"   "cerled"   "cermon"   "chrdep"   "chrnau"   "chrpar"  
 [13] "chrvis"   "eurlan"   "juncom"   "pacmyr"   "pruvir"   "purtri"  
         .          .          .          .          .          .
         .          .          .          .          .          .
         .          .          .          .          .          .
[157] "sclwhi"   "senmul"   "sphcoc"   "stapin"   "steten"   "strcor"  
[163] "swerad"   "taroff"   "thafen"   "towmin"   "tradub"   "valacu"  
[169] "vicame"  
Notice how I assigned a name to each list component before the equal sign, with the component itself following the equal sign. In this case, the first component, species_per_plot, has 172 numbers (each with the plot identifier attached), and the second has 169 strings.
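The components of such a list can then be pulled back out by name or by position. A minimal sketch with made-up values (the real spc.plt and veg are not loaded here):

```r
spc.plt <- c(8, 14, 12)   # made-up species counts for three plots
list.demo <- list(species_per_plot = spc.plt,
                  species_names = c('junost', 'ameuta', 'arcpat'))

list.demo$species_per_plot    # by name, with $
list.demo[['species_names']]  # by name, with [[ ]]
list.demo[[1]]                # by position
```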

R Vector and Matrix Operators

Because R is a 4th generation language, it is often possible to perform fairly sophisticated routines with little programming. The key is to recognize that R operates best on vectors, matrices, or data frames, and to capitalize on that. A large number of functions exist for manipulating vectors, and by extension, matrices. For example, if veg is a vegetation matrix of 100 sample plots and 200 species (plots as rows and species as columns), we can perform the following:

x <- max(veg[,3])     # assigns the maximum value of species 3
                      # among all plots to x
y <- sum(veg[,5])     # assigns the sum of species 5 abundance
                      # in all plots to y
logveg <- log(veg+1)  # creates a new matrix called "logveg" with all
                      # values the log of the respective values in veg
                      # (+1 to avoid log(0), which is undefined)
In addition, R supports logical subscripts, where the subscript is applied wherever the logical expression is true. Logical operators include:

==   equal to
!=   not equal to
>    greater than
>=   greater than or equal to
<    less than
<=   less than or equal to
&    and
|    or
!    not
For example

q <- sum(veg[,8]>10)           # assigns q the number of plots where the
                               # abundance of species 8 is greater than 10
                               # (veg[,8]>10 is evaluated as 1 (TRUE) or
                               # 0 (FALSE), so the sum is of 0's and 1's)
r <- sum(veg[,8][veg[,8]>10])  # assigns r the sum of the abundance for
                               # species 8 in plots where species 8 has
                               # abundance greater than 10
r <- sum(veg[veg[,8]>10,8])    # more concisely
deep14 <- max(veg[,14][soil=='deep'])  # assigns the maximum abundance of
                                       # species 14 on plots with deep soils
deep14 <- max(veg[soil=='deep',14])    # more concisely
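On a small concrete vector (hypothetical abundances, since veg is not loaded here), the same idiom looks like this:

```r
abund <- c(5, 12, 0, 18, 7, 25)   # hypothetical abundances of one species

sum(abund > 10)          # 3  -- the number of plots exceeding 10
sum(abund[abund > 10])   # 55 -- their summed abundance (12 + 18 + 25)
```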

Missing Values

One final case deserves special note. Missing values in a vector or matrix are always a problem in ecological data sets. Sometimes it is best simply to remove samples with missing data, but often only one or a few values are missing, and it's best to keep the sample in the matrix with a suitable missing value code. We'll discuss missing value codes in more detail in the next section, but for now let's assume that we have missing values in the vector spc.plt. To use all of the vector EXCEPT the missing values, use

spc.plt[!is.na(spc.plt)]
That's complicated enough to merit some discussion. The R function to identify a missing value is

is.na( )

so that to say all of a vector except missing values, we set a logical test to be true when values are not missing. Since the R operator for "not" is !, the correct test is

!is.na( )

and to specify which vector we're testing for missing values, we put the vector in the parentheses as follows:

is.na(spc.plt)

Accordingly, the full expression is

spc.plt[!is.na(spc.plt)]
While the symbol for a missing value in a vector or matrix is NA, using

spc.plt[spc.plt != NA]

will NOT work, because any comparison with NA evaluates to NA rather than TRUE or FALSE.

We can use the missing value test on any vector as necessary. For example, the vector of elevations, except where the number of species per plot is missing, is

elev[!is.na(spc.plt)]
This use of missing values is critical in R because operations on vectors or matrices must have the same number of elements. So, if there are missing values in any field we're using in a calculation, the same record (row) must be omitted from all the other fields as well. One approach to working with missing values is to create a "mask"; in a later lab I'll demonstrate how to create one to simplify working with vectors or matrices with missing values.
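A minimal sketch of the mask idea, with made-up values:

```r
spc.plt <- c(8, NA, 12, 9)            # species counts with one missing value
elev    <- c(1300, 1640, 1840, 1730)  # elevations for the same four plots

mask <- !is.na(spc.plt)   # TRUE wherever spc.plt is usable
spc.plt[mask]             # 8 12 9
elev[mask]                # 1300 1840 1730 -- the same record dropped from both
```

Because the same mask is applied to both vectors, the surviving elements still line up record-for-record.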

Row or Column Operations on a Matrix

Vector operators can be applied to every row or column of a matrix to produce a vector with the apply command. For example:

spcmax <- apply(veg,2,max)

creates a vector "spcmax" with the maximum value for each species in its respective position. The apply operator is employed as:

apply("matrix name",1(rowwise) or 2(columnwise),vector operator)

so that

pltsum <- apply(veg,1,sum)

creates a vector of total species abundance in each plot. The vector is as long as the number of rows in matrix veg. If the function to be applied doesn't exist, it can be created on the fly as follows:

pltspc <- apply(veg,2,function(x){sum(x>0)})

where function(x){sum(x>0)} sums the number of plots in which the species is present (abundance greater than 0), with x assigned to each column (species) in turn.

Remembering that R works directly on matrices and vectors we can simplify the apply() example above as simply

pltspc <- apply(veg>0,2,sum)

where the veg>0 converts the veg matrix to a matrix of TRUE and FALSE, and the sum() function treats TRUE as 1 and FALSE as 0.
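On a toy matrix small enough to check by eye (hypothetical abundances), all three apply() forms can be run directly:

```r
# a toy abundance matrix: 3 plots (rows) by 2 species (columns)
veg <- matrix(c(0, 2, 5,
                1, 0, 3), nrow = 3, ncol = 2)

apply(veg, 1, sum)      # plot totals:       1 2 8
apply(veg, 2, max)      # species maxima:    5 3
apply(veg > 0, 2, sum)  # plots per species: 2 2
```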

Triangular Matrices

Often in community ecology we work with symmetric matrices (e.g. similarity, dissimilarity, or distance matrices). These matrices take up extra space, since the value of the diagonal is known by definition, and every other value is stored twice (matrix[x,y] = matrix[y,x]). We can save space by storing only one triangle of the matrix. In addition, some analyses require a vector argument rather than a matrix, and it's convenient to convert the triangular matrix to a vector. This can be done as follows:

triang <- matrix[row(matrix) > col(matrix)]
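For example, on a small symmetric distance matrix (hypothetical values), the row()/col() comparison pulls out the strict lower triangle:

```r
# a small symmetric distance matrix (hypothetical values)
d <- matrix(c(0, 3, 5,
              3, 0, 2,
              5, 2, 0), nrow = 3)

d[row(d) > col(d)]   # 3 5 2 -- the strict lower triangle, in column order
```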

Getting Data Into R

Getting data into any program is often the hardest part about using the program. For R, this is generally not true, as long as the data are reasonably formatted. The R Development Core Team has developed a special manual to cover the ins and outs of getting data into and out of R. It's available as a PDF or HTML at http://cran.r-project.org.

The easiest way is to format the data in columns, with column headings, and blanks or tabs between. For example:

plot elev aspect slope text
   1 1300   240   30   loam
   2 1640   170   20   clay.loam
   3 1840    NA   24   silty.clay.loam
   .  .      .     .     .
   .  .      .     .     .
   .  .      .     .     .
 100 1730    70   15    sandy.loam

The columns do not need to be straight, but multi-word variables like "clay loam" need to be connected or put in quotes. The R convention (but it is just a convention) is to connect with a period, as shown above. It CANNOT be connected with "$". Recent versions of R also allow connections with "_". The above file (if named "site.dat" for instance) could be read with the read.table command as follows:

site <- read.table('site.dat',header=TRUE,row.names=1)

The file must be in your working directory, or the read.table() function will fail to find the file. Windows users often prefer

site <- read.table(file.choose(),header=TRUE,row.names=1)

where the file.choose() function pops up a file chooser to find the file. RStudio users have yet another way to specify the file of interest.

The resulting data frame would be named "site", and the columns would be named exactly as in the data file. The row.names=1 tells R that the first column is the sample identifier, and not data. In the absence of that specifier, R would assign consecutive integer sample IDs. That seems satisfactory, but it is much easier to ensure that your data in different files and data frames match up if you employ the actual sample IDs from your data sheets.

One somewhat controversial issue with read.table() concerns the stringsAsFactors argument. If stringsAsFactors is TRUE (the default for read.table() in R versions before 4.0.0), columns of non-numeric data are read in as factors. If your data are clean, this is probably what you want. If you intend to do some editing of the data in R after reading it in, factors are a nuisance. Instead, specify

site <- read.table('site.dat',header=TRUE,row.names=1,stringsAsFactors=FALSE)

and convert the data into a factor after you have edited it with the factor() function.

Note that the value for aspect in the third plot is NA. This is a missing value code, and will cause R to treat that value as missing, rather than as a code "NA". It's possible to use other codes as missing values if you specify them in the read.table command. For example, suppose in your data set you used -999 as the missing value code. To tell R to set -999 to missing, add the na.strings= argument as follows:

site <- read.table('site.dat',header=TRUE,row.names=1,na.strings="-999")

Alternatively, data can be organized as in traditional spreadsheet "csv" comma-delimited files, as follows:

plot,elev,aspect,slope,text
1,1300,240,30,loam
2,1640,170,20,clay.loam
3,1840,NA,24,silty.clay.loam
.  .  .  .  .
.  .  .  .  .
100,1730,70,15,sandy.loam
In which case it would be read:

site <- read.table('site.dat',header=TRUE,row.names=1,sep=",")

to tell R that the values were separated by commas. Alternatively, you can use

site <- read.csv('site.dat',header=TRUE)

to read the file, as read.csv() calls read.table() with the appropriate parameters as defaults.
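A self-contained round-trip sketch (writing a small csv to a temporary file, then reading it back) shows the whole workflow without needing a file on disk beforehand:

```r
# round-trip sketch: write a small csv to a temporary file, then read it back
tmp <- tempfile(fileext = '.csv')
writeLines(c('plot,elev,aspect',
             '1,1300,240',
             '2,1640,170'), tmp)

site <- read.csv(tmp, row.names = 1)   # plot IDs become the row names
site$elev   # 1300 1640
```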

In cases where column headings are absent, the file can be read with header=FALSE and names can be entered separately with the names command. For example:

names(site) <- c("plot","elev","aspect","slope","text")

Row names (such as plot IDs) can also be added if desired, using the row.names() function in a similar way.

The beauty of the read.table() function is the way it handles variables. If any value in a column is alphabetic, it treats the column as composed of "factors," or categorical variables. There is NEVER a reason to convert categorical variables to integer or numeric codes. However, if you already have categorical variables coded as integers, you can explain that to R with the factor() function after you read the data in.

This turns out to be a common enough problem to deserve some discussion. Let's say that you have a data frame (called site), with a column for soil parent material (called pm), and 1=granite and 2 = limestone. R will think that parent material is a quantitative vector, and can be added and subtracted for example. Worse, you have to remember forever that 1=granite and 2=limestone. The correct thing to do is to convert the data.

site$pm[site$pm==1] <- 'granite'
site$pm[site$pm==2] <- 'limestone'
site$pm <- factor(site$pm)

The first two lines do a substitution using a logical subscript (which we discussed above). The third line converts the resulting vector to a factor. If the values had been 'granite' and 'limestone' all along R would have known that the column was a factor, but when you convert a field from one type to another you need to tell R.
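The conversion can be run end-to-end on a small made-up data frame:

```r
site <- data.frame(pm = c(1, 2, 1, 2))   # parent material coded as integers

site$pm[site$pm == 1] <- 'granite'       # substitution coerces the column
site$pm[site$pm == 2] <- 'limestone'     # to character as a side effect
site$pm <- factor(site$pm)               # then declare it categorical

levels(site$pm)   # "granite" "limestone"
```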

I don't discuss loading R packages until later in this file, but it's worth noting here that if you have loaded package 'foreign' there are additional useful file reading functions. One particularly useful function is read.dbf(), which allows you to read DBase (or XBase) files directly. There are also functions for importing data from SAS, SPSS, Systat, and other software packages. Finally, it is possible, although more difficult, to read Excel .xls files. Excel is exceptionally sloppy about data formatting and storage, and the best advice is simply not to attempt to read .xls files. Rather, using Excel (or LibreOffice Calc), export the spreadsheet to a .csv file, and use read.csv(). This path allows you to edit the data in the .csv file before reading it in, and avoids a huge number of issues later on.

Plotting in R

R has a powerful graphics capability that is much of the appeal of the system. Many of the analyses have special plotting capabilities that allow you to plot results without storing multiple intermediate products. (R likes to point out that it is "object oriented", and that this object orientation is what allows the generality of its plotting routines. While that is generally true, the SYNTAX of R is more appropriately viewed as functional, rather than object oriented, and we will concern ourselves largely with syntax, rather than implementation.) R supports a fairly broad range of graphic devices in addition to excellent on-screen plotting. Reflecting its origins on unix computers, it is quite good at Postscript output, but also includes other formats. The devices available to you for plotting will depend to some extent on your operating system (Windows versus MacOS versus unix/linux).

Graphics Window

If you give R a plotting command without first opening a device, a window will pop up automatically to contain the plot. This plotting area is usually a convenient size for working, and can be resized with the mouse to almost any size. Normally, this is convenient and sufficient. Sometimes, however, we want absolute control over the aspect ratio of the plot, so that 100 units on the X axis is exactly the same size as 100 units on the Y axis. There is a small number of ways to ensure that the plotting is "square", but all of them assume that the plotting window has not been re-sized with the mouse. Accordingly, it is sometimes important to know how to create a plotting window of a specific size.

In Windows, the graphics window is controlled by the windows() function; in unix/linux, the X11 window is controlled by the x11() function; on a Mac, by the quartz() function. The size of the window is specified in inches as arguments to the function. For example, to get a window 8 inches wide by 6 inches tall:

x11(height=6,width=8)       # linux
windows(height=6,width=8)   # windows
quartz(height=6,width=8)    # Mac

This is simple, and you can even control the location if you want with xpos and ypos arguments. You can also move the window with your mouse. As long as you don't resize it you are fine.

Other Devices

The list of other devices you can plot to also depends a little on the operating system. In general R includes postscript, pdf, pictex, and xfig as vector devices, and png and jpeg as raster (pixel) devices.

Simply type

?Devices

to get a list of available devices and their names (note the capital D on Devices). Each of the devices has options that can be set to control plot size, orientation (landscape or portrait), font size, etc.

It is tempting in Windows to save a plotted graph to file by right clicking on it and specifying a format and name to save under. Do not do this. These files are really ugly when included in a document. Take the time to open an appropriate device (try pdf for vector or png for raster (image) plots) and replot the figure. It's definitely worth the time and effort. As I will show below, you can save all the plotting commands in a function, edit it until it's perfect, and then plot to any device.


While R is an expansive language with a large number of routines already included, it doesn't include everything, and has several specific areas of omission with respect to community ecology (e.g. no CCA). Fortunately, the core routines are easily augmented with additional user-written routines which can be loaded into your copy of R. These routines are usually provided in what R calls a "package": a bundle containing the routine itself (which may be implemented partially in FORTRAN or C, as well as R), help files, often test data, and other items as necessary. Accordingly, it's necessary to know how to load packages to make the most of R.

Installing Packages or Libraries in R

The best repository for R packages is CRAN (Comprehensive R Archive Network) at http://cran.r-project.org/. CRAN lists all the available packages alphabetically. It's important to distinguish between INSTALLING a package (which puts a copy on your computer) and LOADING a package, which loads a previously installed package into your running copy of R.

If your machine is on the internet, R has routines available to automatically install or update packages from CRAN.
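A minimal sketch of the install-then-load pattern (using vegan as the example package; the install lines are commented out because they require a network connection):

```r
# install once (this downloads from CRAN, so it needs an internet connection):
#   install.packages('vegan')
# then LOAD it in each session that uses it:
#   library(vegan)

# checking what is already installed touches no network at all:
'stats' %in% rownames(installed.packages())   # TRUE -- base packages are present
```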

Libraries and Packages for Vegetation Ecology

At present, there are many packages available specifically for vegetation ecology. vegan, from Jari Oksanen and others, is widely used and commonly seen in published scientific research. labdsv, from myself, is also fairly commonly used. fso, optpart and coenoflex have narrower, more specific applications. ade4, from Stéphane Dray and others, has a wide array of functions and analyses, and has been extended by Clément Calenge and others in adehabitat, which adds many functions useful in the analysis of wildlife data.

All of these packages are available at CRAN http://cran.r-project.org/. Among them they provide improved PCA, PCO, NMDS, CA, CCA, FSO, DECORANA, and a number of other utilities. We will make extensive use of them in subsequent labs. CRAN also features what are called "Task Views" that group and annotate packages for certain disciplines. For ecology, check out the "Environmetrics" task view for more available packages.

On With The Good Stuff

This has been a trivial introduction to an expansive statistical language, but my intention is to bring this power to vegetation ecologists, and this is more easily done by example than continued abstract presentation. Accordingly, further insights into R will be included in specific exercises as appropriate. Begin with Lab 1