Google's R Style Guide

R is a high-level programming language used primarily for statistical computing and graphics. The goal of the R Programming Style Guide is to make our R code easier to read, share, and verify. The rules below were designed in collaboration with the entire R user community at Google.

Summary: R Style Rules

- File Names: end in .R
- Identifiers: variable.name (or variableName), FunctionName, kConstantName
- Line Length: maximum 80 characters
- Indentation: two spaces, no tabs
- Spacing
- Curly Braces: first on same line, last on own line
- else: surround else with braces
- Assignment: use <-, not =
- Semicolons: don't use them
- General Layout and Ordering
- Commenting Guidelines: all comments begin with # followed by a space; inline comments need two spaces before the #
- Function Definitions and Calls
- Function Documentation
- Example Function
- TODO Style: TODO(username)
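A short, hypothetical function written to the rules above (the name CalculateAverage and its body are illustrative, not taken from the guide):

```r
kDefaultWeight <- 1  # constant: kConstantName style

CalculateAverage <- function(x) {
  # Computes the arithmetic mean of a numeric vector.
  #
  # Args:
  #   x: a numeric vector.
  #
  # Returns:
  #   The mean of x.
  sum(x) / length(x)
}

average.score <- CalculateAverage(c(2, 4, 6))  # variable.name style, <- not =
```

Note the two-space indentation, the opening brace on the same line, and the absence of semicolons, as the summary prescribes.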

5 Generalized Linear Models

Generalized linear models are just as easy to fit in R as ordinary linear models. In fact, they require only an additional parameter to specify the variance and link functions.

5.1 Variance and Link Families

The basic tool for fitting generalized linear models is the glm function, which has the following general structure:

> glm(formula, family, data, weights, subset, ...)

where ... stands for more esoteric options. The families available are gaussian, binomial, poisson, Gamma, inverse.gaussian, and quasi. As can be seen, each of the first five choices has an associated variance function (for binomial, the binomial variance m(1 - m)) and one or more choices of link functions (for binomial, the logit, probit, or complementary log-log). As long as you want the default link, all you have to specify is the family name. Otherwise, name the link explicitly:

> glm(formula, family = binomial(link = probit))

The last family on the list, quasi, is there to allow fitting user-defined models by maximum quasi-likelihood.

5.2 Logistic Regression

Of course the data can be read directly into R:

> attach(cuse)
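A minimal sketch of fitting the two binomial links mentioned above; the data here are simulated, since the cuse data set referenced in the tutorial is not bundled with R:

```r
# Simulate a binary outcome driven by one predictor.
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, size = 1, prob = plogis(-0.5 + 1.2 * x))

# The default link for the binomial family is the logit,
# so specifying the family name alone is enough.
fit.logit <- glm(y ~ x, family = binomial)

# The same model with an explicit probit link.
fit.probit <- glm(y ~ x, family = binomial(link = "probit"))

summary(fit.logit)  # coefficients are on the log-odds scale
```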

R Tutorials: Log Linear Analysis

Preliminaries

Model Formulae

If you have not yet read the Model Formulae tutorial, reading it before you proceed might help you with parts of this one, model-formula-wise that is.

A Little Theory

Most of these tutorials have spared the theory in favor of just showing "how to do it" in R. We will examine a data set called "Titanic", which is a built-in data set describing the outcome of the Titanic sinking in 1912.

> data(Titanic)
> dimnames(Titanic)
$Class
[1] "1st"  "2nd"  "3rd"  "Crew"

$Sex
[1] "Male"   "Female"

$Age
[1] "Child" "Adult"

$Survived
[1] "No"  "Yes"

As you can see, there are four categorical variables crosstabulated in this table.

> margin.table(Titanic)
[1] 2201

There are 2201 cases in the table. Log linear analysis allows us to look for relationships among the variables in a multiway contingency table like this one. Let's look at the data collapsed over "Class" and "Age":

> margin.table(Titanic, c(2, 4))  # show dimensions 2 and 4
        Survived
Sex        No  Yes
  Male   1364  367
  Female  126  344

End of theory.
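The relationships hinted at above can be tested by fitting log-linear models to the table. One common route (a sketch, using glm with a Poisson family on the flattened table rather than the loglin function) is:

```r
# Flatten the 4-way table into a data frame with a Freq column.
titanic.df <- as.data.frame(Titanic)

# Mutual independence model: main effects only.
fit.indep <- glm(Freq ~ Class + Sex + Age + Survived,
                 family = poisson, data = titanic.df)

# Add the Sex:Survived association examined in the margin table above.
fit.assoc <- update(fit.indep, . ~ . + Sex:Survived)

# A large drop in deviance indicates Sex and Survived are associated.
anova(fit.indep, fit.assoc)
```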

Cohen's kappa

Cohen's kappa coefficient is a statistical measure of inter-rater agreement or inter-annotator agreement [1] for qualitative (categorical) items. It is generally thought to be a more robust measure than simple percent agreement calculation, since κ takes into account the agreement occurring by chance. Some researchers [2] have expressed concern over κ's tendency to take the observed categories' frequencies as givens, which can have the effect of underestimating agreement for a category that is also commonly used; for this reason, κ is considered an overly conservative measure of agreement. Others [3] contest the assertion that kappa "takes into account" chance agreement. To do this effectively would require an explicit model of how chance affects rater decisions.

Calculation

Cohen's kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories. The equation for κ is:

    κ = (po − pe) / (1 − pe)

where po is the relative observed agreement among the raters and pe is the hypothetical probability of chance agreement, computed from the raters' marginal category frequencies. Complete agreement gives κ = 1; agreement no better than chance gives κ = 0.
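Continuing in R, κ can be computed by hand from a two-rater agreement table (the counts below are made up for illustration):

```r
# Rows: rater A's categories; columns: rater B's categories.
# Diagonal cells are the items the two raters agree on.
tab <- matrix(c(20, 10,
                 5, 15), nrow = 2, byrow = TRUE)
n  <- sum(tab)
po <- sum(diag(tab)) / n                      # observed agreement
pe <- sum(rowSums(tab) * colSums(tab)) / n^2  # chance agreement from margins
kappa <- (po - pe) / (1 - pe)
kappa  # 0.4 for these counts
```

Here po = 35/50 = 0.7 and pe = 0.5, so κ = 0.2/0.5 = 0.4, a moderate level of agreement beyond chance.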
