Adapting variance for random-text sampling

Introduction

Conventional stochastic methods based on the Binomial distribution rely on a standard model of random sampling whereby freely-varying instances of a phenomenon under study can be said to be drawn randomly and independently from an infinite population of instances.

These methods include confidence intervals and contingency tests (including multinomial tests), whether computed by Fisher’s exact method or by variants of log-likelihood, χ², or the Wilson score interval (Wallis 2013). These methods also underpin others: the Normal approximation to the Binomial allows us to compute the variance of the distribution, and appears in line fitting and other generalisations.

In many empirical disciplines, samples are rarely drawn “randomly” from the population in a literal sense. Medical research tends to sample available volunteers rather than names compulsorily called up from electoral or medical records. However, provided that researchers are aware that their random sample is limited by the sampling method, and draw conclusions accordingly, such limitations are generally considered acceptable. Obtaining consent occasionally introduces experimental bias; actually recruiting relevant individuals is a more common problem.

However, in a number of disciplines, including corpus linguistics, samples are not drawn randomly from a population of independent instances, but instead consist of randomly-obtained contiguous subsamples. In corpus linguistics, these subsamples are drawn from coherent passages or transcribed recordings, generically termed ‘texts’. In this sampling regime, whereas any pair of instances in independent subsamples satisfies the independent-sampling requirement, pairs of instances in the same subsample are likely to be co-dependent to some degree.

To take a corpus linguistics example, a pair of grammatical clauses in the same text passage are more likely to share characteristics than a pair of clauses in two entirely independent passages. Similarly, epidemiological research often involves “cluster-based sampling”, whereby each subsample cluster is drawn from a particular location, family nexus, etc. Again, it is more likely that neighbours or family members share a characteristic under study than random individuals.

If the random-sampling assumption is undermined, a number of questions arise.

  • Are statistical methods employing this random-sample assumption simply invalid on data of this type, or do they gracefully degrade?
  • Do we have to employ very different tests, as some researchers have suggested, or can existing tests be modified in some way?
  • Can we measure the degree to which instances drawn from the same subsample are interdependent? This would help us both determine the scale of the problem and arrive at a potential solution that takes this interdependence into account.
  • Would revised methods only affect the degree of certainty of an observed score (variance, confidence intervals, etc.), or might they also affect the best estimate of the observation itself (proportions or probability scores)?

Excerpt

We will employ a method related to ANOVA and F-tests, applying this method to a probabilistic rather than linear scale. This step is not taken lightly, but, as we shall see in section 6, it can be justified.

Consider an observation p drawn from a number of texts, t, based on n total instances. Conventionally we would assume that these n instances are randomly drawn from an infinite population, and then employ the Normal approximation to the Binomial distribution:

variance s² = p(1 – p)/n, and (1)
standard deviation s ≡ √(p(1 – p)/n).

Wilson’s score interval (w⁻, w⁺)
≡ [p + zα/2²/2n ± zα/2·√(p(1 – p)/n + zα/2²/4n²)] / [1 + zα/2²/n], (2)

where zα/2 is the critical value of the Normal distribution for a given error level α (see Wallis 2013 for a detailed discussion). Other derivations from (1) include χ² and log-likelihood tests, least-squares line-fitting, and so on. The model assumes that all n instances are randomly drawn from an infinite (or very large) population. However, we suspect that our subsamples are not equivalent to random samples, and that this sampling method will affect the result.
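To make this concrete, here is a minimal Python sketch of equations (1) and (2). The function names and the default critical value z ≈ 1.96 (two-tailed, α = 0.05) are illustrative choices, not notation from the paper.

```python
import math

def binomial_variance(p, n):
    """Equation (1): Binomial variance of an observed proportion p over n instances."""
    return p * (1 - p) / n

def wilson_interval(p, n, z=1.959964):
    """Equation (2): Wilson score interval (w-, w+) for p at critical value z."""
    centre = p + z * z / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (centre - spread) / denom, (centre + spread) / denom
```

For example, wilson_interval(0.05, 100) returns roughly (0.022, 0.112): an asymmetric interval, as expected for a proportion near zero.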

To investigate this question, our approach involves two stages.

First, we measure the variance of scores between text subsamples according to two different models: one that presumes that each subsample is a random sample, and one calculated from the actual distribution of subsample scores. Consider the frequency distribution of probability scores, pᵢ, across all t texts, which is centred on the mean probability score of the subsamples, p. An example distribution of this type is illustrated by Figure 1.

Figure 1: The frequency distribution of p(CL(inter) | CL) per text, quantised to two decimal places, tends to the Binomial distribution.

subsample mean p = ∑ pᵢ / t.

If subsamples were randomly drawn from the population, it would follow from (1) that the variance could be predicted by

between-subsample variance Sss² = p(1 – p) / t. (3)

To measure the actual variance of the distribution we employ a method derived from Sheskin (1997: 7). First, note that the variance of a series of N observed scores Xᵢ can be obtained by

s² = ∑(Xᵢ – X̄)² / N,

which can be rewritten as

observed between-subsample variance sss² = ∑(pᵢ – p)² / t. (4′)

This formula measures the internal variance of the series, but it fails to take into account the fact that the series is itself a sample from which we wish to estimate the true population variance. The formula for the unbiased estimate of the population variance may be obtained by

observed between-subsample variance sss² = ∑(pᵢ – p)² / (t – 1), (4)

where t – 1 is the number of degrees of freedom of our set of observations. This is the formula we will use for our computations.
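In code, the two variance models of equations (3) and (4) might be computed as follows. This is a sketch only; the argument name p_scores (the list of per-text probability scores pᵢ) is an illustrative assumption.

```python
def between_subsample_variances(p_scores):
    """Return the subsample mean p, the predicted between-subsample
    variance Sss² = p(1 - p)/t (equation (3)), and the observed
    unbiased variance sss² = ∑(pᵢ - p)²/(t - 1) (equation (4))."""
    t = len(p_scores)
    p = sum(p_scores) / t                                       # subsample mean
    predicted = p * (1 - p) / t                                 # equation (3)
    observed = sum((pi - p) ** 2 for pi in p_scores) / (t - 1)  # equation (4)
    return p, predicted, observed
```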

Second, we adjust the weight of evidence according to the degree to which these two variances (equations (3) and (4)) disagree. If the observed and predicted variance estimates coincide, then the total set of subsamples is, to all intents and purposes, a random sample from the population, and no adjustment is needed to sample variances, standard deviations, confidence intervals, tests, etc.

We can expect, however, that in most cases the actual distribution has greater spread than that predicted by the randomness assumption. In such cases, we employ the ratio of variances, Fss, as a scale factor for the number of random independent cases, n.

Gaussian variances with the same probability p are inversely proportional to the number of cases supporting them, n, i.e. s² ≡ p(1 – p)/n (cf. equation (1)). Assuming the Normal approximation to the Binomial holds for the distribution of p, we can estimate a corrected total independent sample size n′ by multiplying n by the ratio of variances for the same p.

cluster-adjustment ratio Fss = Sss² / sss², and (5)
corrected sample size n′ = n · Fss.

To put it another way, the ratio n′:n is the same as Sss²:sss². This ratio will typically be less than 1, and thus n is decreased. If we decrease n in equations (1) and (2), we obtain larger estimates of sample variance and wider confidence intervals. An adjusted n is easily generalised to contingency tests and other methods.
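Putting the pieces together, here is a sketch of the adjustment, reusing the two functions sketched above. Note that this simple version assumes subsamples of comparable size; the uneven-size case (section 6) and the probabilistically-weighted method of equation (7) are not implemented here.

```python
def adjusted_wilson_interval(p_scores, n, z=1.959964):
    """Scale the sample size by Fss = Sss² / sss² (equation (5)) and
    recompute the Wilson interval with the corrected size n' = n·Fss."""
    p, predicted, observed = between_subsample_variances(p_scores)
    F_ss = predicted / observed
    n_adjusted = n * F_ss   # n' < n where observed spread exceeds prediction
    return wilson_interval(p, n_adjusted, z)
```

Where Fss < 1 the adjusted interval is wider than the conventional one; where Fss > 1 it is narrower.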

8. Example 4: Rate of transitive complement addition

Figure 6 plots the distribution of p with Wilson intervals across ICE-GB genre categories. The thin ‘I’-shaped error bars represent the conventional Wilson score interval for p, assuming random sampling. The thicker error bars represent the adjusted Wilson interval obtained using the probabilistically-weighted method of equation (7). These results are tabulated in Table 2 in the paper.

Figure 6: Probability of following a VP with a transitive complement clause, with Wilson score confidence intervals, before and after adjustment, over text categories of ICE-GB.

The figure reinforces observations we made earlier. Within a single text type, such as broadcast interviews, p has a compressed range and cannot plausibly approach 1. (Note that mean p does not exceed 0.03 in any genre.) The observed between-text variance is smaller than that predicted by equation (3), and, armed with this information, we are able to narrow the 95% Wilson score interval for p. This degree of compression (or, to put it another way, the plausible value of max(p)) may also differ by text genre.

However, the reduction due to range-compression is offset by a countervailing tendency: pooling genres increases the variance of p. The distribution of texts across the entire corpus consists of the sum of the spoken and written distributions (means 0.0091 and 0.0137 respectively), and so on.

The Wilson interval for the mean p averaged over all of ICE-GB approximately doubles in width (Fss = 0.2509), and the intervals for spoken, dialogue, private, written and printed (marked in bold in Figure 6) also expand, albeit to lesser extents. The other intervals contract (Fss > 1), tending to generate a more consistent set of intervals over all text categories.
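As a quick sanity check on the quoted figure: interval width varies roughly with 1/√n, so scaling n by Fss = 0.2509 should widen the interval by a factor of about √(1/0.2509).

```python
import math
print(math.sqrt(1 / 0.2509))  # ≈ 1.996: the adjusted interval is about twice as wide
```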

Contents

  1. Introduction
  2. Previous research
    2.1 Employing rank tests
    2.2 Case interaction models
  3. Adjusting the Binomial model
  4. Example 1: interrogative clause probability, direct conversations
    4.1 Alternative method: fitting
  5. Example 2: Clauses per word, direct conversations
  6. Uneven-size subsamples
  7. Example 3: Interrogative clause probability, all ICE-GB data
  8. Example 4: Rate of transitive complement addition
  9. Conclusions

Full paper (PDF)

Citation

Wallis, S.A. 2015. Adapting random-instance sampling variance estimates and Binomial models for random-text sampling. London: Survey of English Usage, UCL. http://www.ucl.ac.uk/english-usage/statspapers/recalibrating-intervals.pdf

References

Sheskin, D.J. 1997. Handbook of Parametric and Nonparametric Statistical Procedures. Boca Raton, FL: CRC Press.

Wallis, S.A. 2013. Binomial confidence intervals and contingency tests: mathematical fundamentals and the evaluation of alternative methods. Journal of Quantitative Linguistics 20:3, 178-208.
