BIOS601 AGENDA: Friday September 02, 2016
[updated August 19, 2016]
- Discussion of computing and statistical inference issues in the
assignment on sampling of space (the surface of the Earth) and time (w.r.t. JH's activity);
answers to be handed in for ...
Q1 [not required to do the R programming and sampling for part (i): use a made-up but realistic estimate],
Q2 [again, not required to do the R programming and sampling for part (i):
use made-up but realistic statistics],
Q3,
Q4,
Q5,
Q6,
Q7
The first (general) computing issue is (if need be) to get up to speed
in the use of R. See the R links on the main course page.
If you run into problems, let JH know ASAP.
A statistical/computing issue is how to
randomly sample locations on the surface of a sphere, using
latitude and longitude co-ordinates. See the notes at the bottom of the
file containing the
2 R functions inside the Oceanography
link (on the height of the land and the depth of the ocean) inside the resources for surveys. JH thinks of the problem
by visualizing the segments of a peeled orange!
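One minimal sketch of the idea (not the course's posted functions; the function name is made up here): longitude can be drawn uniformly, but latitude must be arcsine-transformed, because the orange segments are wide at the equator and narrow at the poles.

```r
# Sketch: uniform sampling of locations on a sphere via lat/long.
# Drawing latitude uniformly on [-90, 90] would oversample the poles;
# instead draw sin(latitude) uniformly on [-1, 1].
sample.sphere <- function(n) {
  lon <- runif(n, min = -180, max = 180)             # degrees
  lat <- asin(runif(n, min = -1, max = 1)) * 180/pi  # degrees
  data.frame(lat = lat, lon = lon)
}

set.seed(601)
pts <- sample.sphere(1000)
# Sanity check: since sin(30 deg) = 0.5, about half the points
# should fall within 30 degrees of the equator.
mean(abs(pts$lat) <= 30)
```

A naive `runif(n, -90, 90)` for latitude fails the same check badly, which is one way to convince yourself the transformation matters.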
Like the others, Q6 is a blend of the theoretical and the practical. Here you are asked to use
R to read in the .csv file, produce some summary statistics, and calculate some standard errors.
You probably have not worked out the SE [sqrt(Variance)] for a ratio, but this is a good example
of something often used in applied work. Hint: the log of a ratio is the difference of the logs of its
components; the approximate variance of the log of a positive r.v. is (by the Delta method) the original variance times the
square of the Jacobian or scaling factor (here, the derivative of the log), evaluated at/near the centre of the old scale.
Think of the variance of September temperatures in F as the variance of September temperatures in C,
multiplied by the square of the scaling factor... the scale of F is 9/5 times larger than the scale of C.
If the scaling is not linear (e.g. an elastic band), use the scale factor at the centre.
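The Delta-method hint above can be checked numerically. A sketch with made-up lognormal "step counts": the approximate Var[log X] is Var[X] times the squared derivative of log, i.e. Var[X]/mean(X)^2, and it should land close to the directly computed variance of log X.

```r
# Sketch of the Delta method for Var(log X), with made-up data.
# Var(log X) ~ Var(X) * (d/dx log x)^2 at x = mean(X) = Var(X)/mean(X)^2.
# For a ratio A/B with independent components,
# Var(log(A/B)) = Var(log A) + Var(log B).
set.seed(601)
x <- rlnorm(500, meanlog = 8, sdlog = 0.3)  # e.g. daily step counts

delta.var <- var(x) / mean(x)^2   # Delta-method approximation
direct.var <- var(log(x))         # direct calculation, for comparison
c(delta = delta.var, direct = direct.var)
```

The two numbers agree to about the second decimal here; the approximation degrades when the coefficient of variation is large.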
Reference is made to the Finite Population Correction (FPC) for the sampling variance. This would apply in
cases where you sample n (< N) of the N members of the Canadian or U.S. Senate.
It is written in slightly different versions by different people, but JH tends to think of
its approximate value as (1 - n/N). To get the proper variance (assuming in our case that the target is
just these 2 years, nothing else), the variance computed under an infinite-population or
sampling-with-replacement assumption needs to be multiplied by this less-than-unity factor,
so that in the limit, if we sampled all N, the FPC = 1 - n/N = 0 and the variance
for our (census) estimate is 0.
The form of the FPC can be derived for the binary response case using
the ratio of the hypergeometric to Binomial variance: the binomial is for samples from an infinite
population -- or a finite one but sampling with replacement-- whereas the hypergeometric is for samples
from a finite one, but without replacement.
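That ratio is easy to exhibit directly. A sketch with made-up numbers: the hypergeometric variance equals the binomial variance times (N - n)/(N - 1), which is the exact FPC, and for modest n/N it is close to JH's (1 - n/N) approximation.

```r
# Sketch: exact FPC as the hypergeometric/binomial variance ratio
# (made-up population and sample sizes).
N <- 100   # finite population size
K <- 40    # number of 'successes' in the population
n <- 25    # sample size, drawn without replacement
p <- K / N

var.binom <- n * p * (1 - p)                    # with replacement / infinite pop.
var.hyper <- n * p * (1 - p) * (N - n)/(N - 1)  # without replacement

var.hyper / var.binom   # exact FPC: (N - n)/(N - 1)
1 - n/N                 # JH's approximation
```

With N = 100 and n = 25, the exact factor is 75/99 ≈ 0.758 versus the approximate 0.750; as n approaches N, both go to (nearly) zero.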
Your other choice of method to sample the days from the scanned logs can be based on what
factors you think most affect activity, and these can be used in the sampling design. There is no one
best design a priori (indeed, the computerized data from 2010-2011 could be used to test out various
designs/estimators, but this is not required for the exercise, which is designed just to get
you thinking about the issues).
When using R (or another random number generator or random number table) keep track
of how exactly you started the sequence (see set.seed in R).
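A minimal illustration of the point about seeds: recording the seed lets anyone re-create your exact sample (the sizes below are made up).

```r
# Reproducible sampling: record the seed so the draw can be re-created.
set.seed(2016)
days1 <- sample(1:365, size = 10)  # e.g. 10 random days from a year's log

set.seed(2016)                     # restart the generator at the same seed...
days2 <- sample(1:365, size = 10)  # ...and the same 10 days come out

identical(days1, days2)            # TRUE
```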
One of the savings to think about when entering data is discarding digits... what would be the effect
if you only entered thousands of steps (rounded or truncated to an integer number of thousands), or
hundreds of steps? Can you anticipate how much 'damage' is done by such approximations?
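One way to anticipate the 'damage' is a quick simulation with made-up counts: rounding to a grid of width w adds a roughly uniform error with SD w/sqrt(12), usually trivial next to the day-to-day SD.

```r
# Sketch: how much does rounding step counts to the nearest 1000 hurt?
# (Made-up daily counts.) Rounding to width w adds an error with
# SD about w/sqrt(12), i.e. 1000/sqrt(12) ~ 289 steps here.
set.seed(601)
steps   <- round(rnorm(365, mean = 9000, sd = 2500))
rounded <- round(steps / 1000) * 1000

sd(steps)            # day-to-day SD of the full-digit counts
sd(rounded)          # barely changed
sd(steps - rounded)  # SD of the rounding error itself, near 289
```

Since the day-to-day SD is about 2500, rounding to thousands inflates it by well under 1 percent.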
Remarks:
Some of the conceptual and practical statistical issues raised by this assignment include the
distinction between standard deviation and
standard error; the concept of a margin of error;
when it is appropriate to use the Normal (Gaussian) approximation to the binomial distribution;
the (often under-appreciated) centrality of the
Central Limit Theorem (CLT) in
applied statistical work, not just for the sampling distribution of a
sample proportion, but also for that of a
sample mean.
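The CLT point for a sample proportion can be seen in a few lines of simulation (sizes made up): the simulated proportions centre on p with spread close to the textbook sqrt(p(1-p)/n).

```r
# Sketch: sampling distribution of a sample proportion, vs. the
# Normal-approximation SD sqrt(p(1-p)/n). Made-up n and p.
set.seed(601)
n <- 100
p <- 0.3
phat <- rbinom(10000, size = n, prob = p) / n  # 10000 simulated proportions

c(mean      = mean(phat),
  sd        = sd(phat),
  theory.sd = sqrt(p * (1 - p) / n))
```

A histogram of `phat` (e.g. `hist(phat)`) looks convincingly Gaussian at this n and p; at small n or extreme p it does not, which is when the Normal approximation should be distrusted.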
Other points the exercise tries to make are that most of sampling theory involves the
calculation of variances. These derivations bring you back to fundamentals, and
it's good to be able to work them out from scratch rather than consult a textbook or the internet
and copy the formulae blindly, without an understanding of what the
formula should look like.
You will notice that we have already started calling the sqrt of the variance of a STATISTIC
the STANDARD ERROR of that statistic. One key point about a standard error is that it refers
to the variability of a STATISTIC, not of individual observations. Some writers reserve the
term for cases where one plugs an estimate into a variance formula. For example, they would call
sigma/sqrt(n) the standard deviation of the sample mean, but they would call
s/sqrt(n) the estimated standard deviation, or standard error for short. Bear in mind that this
terminology is not standardized across the profession.
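A small simulation (made-up parameters) makes the SD-vs-SE distinction concrete: the individual observations have SD sigma, but the sample MEAN varies with SD sigma/sqrt(n).

```r
# Sketch: SD of individual observations vs. SD of the sample mean.
set.seed(601)
n     <- 50
sigma <- 4
means <- replicate(5000, mean(rnorm(n, mean = 10, sd = sigma)))

sd(means)          # empirical SD of the STATISTIC (the sample mean)
sigma / sqrt(n)    # sigma/sqrt(n) ~ 0.566, much smaller than sigma = 4
```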
One of the points of asking you to think about various sampling options to get at the
'census' or 100% answer is to get you to think of sampling as a measurement tool.
Usually, the fancier and costlier the measuring instrument, the better the measurement.
But we can't always afford the million-dollar answer and have to live with the hundred dollar answer.
I once encountered a high-up Air Canada executive who didn't like the fact that the Canadian long-form
portion of the census involved only a 10% sample, and he wasn't sampled. So he did not trust the
results, since not everyone was surveyed. I asked him whether when a doctor drew a blood sample from
him, he should give 100% of his blood so as to get an accurate concentration value.
The Harper government did away with the 1-in-10 random sample that was a compulsory
component of the Canadian census,
and replaced it with a voluntary 1-in-3 sample, so that despite the large refusal rate the bigger
sample size would still leave about the same n. What do you think of estimates based on this scheme,
which has had only about a 65% participation rate?
The participation rate in the old 1-in-10 sample was 99.999% and (despite Harper's
claims to the contrary) only a handful complained or refused.
One of the first acts of the Trudeau government was to re-instate the mandatory long-form census.
A lot of psychological or psychometric measurement also involves sampling -- of items
to use in a time-limited questionnaire or exam that
can only contain a sample of the items that might be asked about
(think of the format of the old paper-based GRE exams described in the Measurement Statistics Notes). The measurement model
observed estimate = TRUTH + error
is the same whether the error comes from sampling of items or of persons
or of time. In one case we might call it measurement error, and in another sampling error.
But it could include both!
Is it worth extracting and
entering all of the step counts in all of their digits to have an answer with more decimal places than one needs?
And in so doing, we overlook other SEs -- i.e. statistical errors that do not decrease in magnitude
as we increase n! You can probably think of some in the case of the StepCounter!
JH likes to say that besides standard error, the abbreviation SE could stand for many other types of error.
It could be SAMPLING error, or STATISTICAL error, or STUPID error. Sadly,
statistical theory is only good at quantifying sampling error, where the sqrt(n) is always
in the denominator of the formula. BUT IT DOES NOT KNOW HOW TO JUDGE OTHER TYPES OF ERROR,
and SOMETIMES THESE CAN BE A LOT LARGER THAN THE SAMPLING ERROR, AND THESE NON-SAMPLING ERRORS
CANNOT BE MADE SMALLER BY INCREASING n.
INCREASING n will just make the answer MORE PRECISELY WRONG.