

# Comparing frequencies within a discrete distribution

**Note:** This page explains how to compare observed frequencies *f*₁ and *f*₂ from the same distribution, **F** = {*f*₁, *f*₂, …}. To compare observed frequencies *f*₁ and *f*₂ from different distributions, i.e. where **F₁** = {*f*₁, …} and **F₂** = {*f*₂, …}, you need to use a chi-square or Newcombe-Wilson test.

### Introduction

In a recent study, my colleague Jill Bowie obtained a discrete frequency distribution by manually classifying cases in a small sample drawn from a large corpus.

Jill converted this distribution into a row of probabilities and calculated Wilson score intervals on each observation, to express the uncertainty associated with a small sample. She had one question, however:

**How do we know whether the proportion of one quantity is significantly greater than another?**

We might use a Newcombe-Wilson test (see Wallis 2013a), but this test assumes that we want to compare samples from independent sources. Jill’s data are drawn from the same sample, and all probabilities must sum to 1. Instead, the optimum test is a **dependent-sample** test.

### Example

A discrete distribution looks something like this: **F** = {108, 65, 6, 2}. This is the frequency data for the middle column (circled) in the following chart.

This may be converted into a probability distribution **P**, representing the proportion of examples in each category, by simply dividing by the total: **P** = {0.60, 0.36, 0.03, 0.01}, which sums to 1.

We can plot these probabilities, with Wilson score intervals, as shown below.
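As a minimal sketch of this step, the following computes the probability distribution **P** from **F** = {108, 65, 6, 2} and attaches a 95% Wilson score interval to each observed proportion (the helper function name is ours, not from the post):

```python
import math

def wilson_interval(p, n, z=1.96):
    """Wilson score interval for observed proportion p out of n (default 95%)."""
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

F = [108, 65, 6, 2]            # observed frequency distribution
n = sum(F)                     # sample size, 181
P = [f / n for f in F]         # probability distribution; sums to 1

for f, p in zip(F, P):
    lo, hi = wilson_interval(p, n)
    print(f"f = {f:3d}  p = {p:.2f}  95% interval ({lo:.3f}, {hi:.3f})")
```

Unlike the simpler 'Wald' interval, the Wilson interval remains within [0, 1] even for proportions near zero, which matters for small categories such as *f* = 2 here.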

**So how do we know if one proportion is significantly greater than another?**

- When comparing values diachronically (horizontally), data are drawn from **independent samples**. We may use the Newcombe-Wilson test, and employ the handy visual rule that if intervals do not overlap they must be significantly different.
- However, probabilities drawn from the **same sample** (vertically) sum to 1, which is not the case for independent samples! There are *k* − 1 degrees of freedom, where *k* is the number of classes. It turns out that the relevant significance test is an extremely basic one, but it is rarely discussed in the literature.
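One way to sketch such a dependent-sample comparison (an illustration under our own assumptions, not necessarily the exact formulation the post goes on to develop) is to treat the two competing frequencies as a single binomial observation and test whether *p* = *f*₁/(*f*₁ + *f*₂) differs significantly from 0.5:

```python
import math

def dependent_sample_z(f1, f2, z_crit=1.96):
    """Test whether two frequencies from the same distribution differ:
    is p = f1/(f1+f2) significantly different from the constant 0.5?"""
    n = f1 + f2
    p = f1 / n
    # single-sample z test against P = 0.5; standard deviation is sqrt(0.25/n)
    z = (p - 0.5) / math.sqrt(0.25 / n)
    return z, abs(z) > z_crit

# the two largest frequencies in F = {108, 65, 6, 2}
z, significant = dependent_sample_z(108, 65)
```

On this sketch, 108 vs. 65 yields |z| > 1.96, so the two proportions would be judged significantly different at the 0.05 level.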

# A methodological progression

### (with thanks to Jill Bowie)

### Introduction

One of the most controversial arguments in corpus linguistics concerns the relationship between a ‘variationist’ paradigm comparable with lab experiments, and a traditional corpus linguistics paradigm focusing on normalised word frequencies.

Rather than see these two approaches as diametrically opposed, we propose that it is more helpful to view them as representing different points on a **methodological progression**, and to recognise that we are often forced to compromise our ideal experimental practice according to the data and tools at our disposal.

Viewing these approaches as being represented along a progression allows us to step back from any single perspective and ask ourselves how different results can be reconciled and research may be improved upon. It allows us to consider the potential value in performing more computer-aided manual annotation — always an arduous task — and where such annotation effort would be usefully focused.

The idea is sketched in the figure below.

# Capturing patterns of linguistic interaction

### Abstract

Full Paper (PDF)

Numerous competing grammatical frameworks exist on paper, as algorithms and embodied in parsed corpora. However, not only is there little agreement about grammars among linguists, but there is no agreed methodology for demonstrating the benefits of one grammar over another. Consequently the status of parsed corpora or ‘treebanks’ is suspect.

The most common approach to empirically comparing frameworks is based on the reliable retrieval of individual linguistic events from an annotated corpus. However this method risks circularity, permits redundant terms to be added as a ‘solution’ and fails to reflect the broader structural decisions embodied in the grammar. In this paper we introduce a new methodology based on the ability of a grammar to reliably capture patterns of linguistic interaction along grammatical axes. Retrieving such patterns of interaction does not rely on atomic retrieval alone, does not risk redundancy and is no more circular than a conventional scientific reliance on auxiliary assumptions. It is also a valid experimental perspective in its own right.

We demonstrate our approach with a series of natural experiments. We find an interaction captured by a phrase structure analysis between attributive adjective phrases under a noun phrase with a noun head, such that the probability of adding successive adjective phrases falls. We note that a similar interaction (between adjectives preceding a noun) can also be found with a simple part-of-speech analysis alone. On the other hand, preverbal adverb phrases do not exhibit this interaction, a result anticipated in the literature, confirming our method.

Turning to cases of embedded postmodifying clauses, we find a similar fall in the additive probability of both successive clauses modifying the same NP and embedding clauses where the NP head is the most recent one. Sequential postmodification of the same head reveals a fall and then a rise in this additive probability. Reviewing cases, we argue that this result can only be explained as a natural phenomenon acting on language production which is expressed by the distribution of cases on an embedding axis, and that this is in fact empirical evidence for a grammatical structure embodying a series of speaker choices.

We conclude with a discussion of the implications of this methodology for a series of applications, including optimising and evaluating grammars, modelling case interaction, contrasting the grammar of multiple languages and language periods, and investigating the impact of psycholinguistic constraints on language production.

# Inferential statistics – and other animals

### Introduction

**Inferential statistics** is a methodology of *extrapolation* from data. It rests on a mathematical model which allows us to predict values in the population based on observations in a sample drawn from that population.

Central to this methodology is the idea of reporting not just the observation itself but also the *certainty* of that observation. In some cases we can observe the population directly and make statements about it.

- We can cite the 10 most frequent words in Shakespeare’s *First Folio* with complete certainty (allowing for spelling variations). Such statements would simply be facts.
- Similarly, we could take a corpus like ICE-GB and report that in it there are 14,275 adverbs ending in *-ly* out of 1,061,263 words.

Provided that *we limit the scope of our remarks to the corpus itself*, we do not need to worry about degrees of certainty, because these statements are simply facts. Statements about the corpus are sometimes called **descriptive statistics** (the word *statistic* here being used in its most general sense, i.e. a number).

# That vexed problem of choice

(with thanks to Jill Bowie and Bas Aarts)

### Abstract

Paper (PDF)

A key challenge in corpus linguistics concerns the difficulty of operationalising linguistic questions in terms of *choices* made by speakers or writers. Whereas lab researchers design an experiment around a choice, comparable corpus research implies the inference of counterfactual alternates. This non-trivial requirement leads many to rely on a per million word baseline, meaning that variation separately due to *opportunity* and *choice* cannot be distinguished.

We formalise definitions of *mutual substitution* and *the true rate of alternation* as useful idealisations, recognising they may not always hold. Analysing data from a new volume on the verb phrase, we demonstrate how a focus on choices available to speakers allows researchers to factor out the effect of changing opportunities to draw conclusions about choices.
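The distinction between opportunity and choice can be illustrated with a toy calculation (all counts below are invented for illustration; they are not from the study). A form may *fall* per million words while speakers *choose* it more often whenever the alternation is available:

```python
# Hypothetical counts: form f in two periods, A and B.
f_A, f_B = 500, 450            # occurrences of the form
alt_A, alt_B = 1000, 600       # cases where the alternation was available
words_A, words_B = 1_000_000, 1_000_000   # corpus sizes in words

# Traditional baseline: normalised frequency per million words.
per_million_A = f_A / words_A * 1_000_000   # 500 pmw
per_million_B = f_B / words_B * 1_000_000   # 450 pmw: an apparent decline

# Choice baseline: the rate at which speakers select f when they can.
choice_A = f_A / alt_A                      # 0.50
choice_B = f_B / alt_B                      # 0.75: speakers choose f more often
```

Here the per-million-word rate falls only because the *opportunity* (the alternation set) shrinks; the choice rate shows the opposite trend, which is the kind of effect the paper argues a choice-based baseline can isolate.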

We discuss research strategies where alternates may not be easily identified, including refining baselines by eliminating forms and surveying change against multiple baselines. Finally we address three objections that have been made to this framework, that alternates are not reliably identifiable, baselines are arbitrary, and differing ecological pressures apply to different terms. Throughout we motivate our responses by evidence from current research, demonstrating that whereas the problem of identifying choices may be ‘vexed’, it represents a highly fruitful paradigm for corpus linguistics.