Prep Time 2012

Still not the myth I was talking about. Offensive production versus expected differential, on a yards-per-play basis, relative to opponents of opponents under similar conditions is the variable I'm ranting about. Our focuses are truly disjoint. I heard "prep time" and got carried away without really reading the post, because I'd fixated on this variable for so long trying to quantify the disadvantage. I had to use 10 years of CPJ data to factor out artifacts of the team and examine only the offense, but I just didn't have enough data; the data to do what I was trying to do doesn't exist, and it's not at all what we're doing ITT. My apologies for the derailment.
I'm working on updating that one. I did that originally about a year ago. :biggthumpup:
 
Yeah, no, I don't think it's necessarily that. I think other coaches just get so prideful that CPJ might "out coach" them that they drive their players harder to prep vs us than vs other teams. UVA last year for instance.

That also plays well into the other stat, that teams lose more often after playing us.
 
You calculated per-play OPR differentials versus expected, and compared them to second-order opponents under similar conditions? Color me surprised; I thought I was the only person who believed in voodoo like that. I still don't consider the data for just CPJ at GT to be good, because until last year we'd essentially been in various phases of installing the offense and personnel, without comparison to the same phases with previous CPJ teams. I had all of Navy and a little bit of GSU, but it's too hard to find well-formatted FCS data that far back, and I don't have time to track it all down and encode it myself. You lose even more integrity there because GSU played a bunch of Division II and NAIA schools over the years that just don't seem to keep records at all. I couldn't come up with enough data to make a statement about the offense that wasn't polluted by data specific to the team.
 
That's another one I'd like to take a good look at, because I always thought it was because of how tired we make defenses with our blocking schemes. That might show up in the stats. :lol:
 
I think they're just tired after spending all that extra effort prepping because their coach is afraid he will get showed up.
 
I am updating my look into GT offense against teams with extra time with 2011 data. Will post in a new thread.

edit: https://docs.google.com/spreadsheet/pub?key=0Ar8cqnkh36RRdENBQk1fRTYyRDFIVTdOa1Q0cXNrbGc&output=html

Against BCS opponents:
21-8 (.72) when they have 7 days or less to prepare
4-11 (.27) when they have 8 or more

Thanks.

If anyone here thinks those numbers aren't statistically significant, you're a loon. Sorry. I wish it was a myth too. But it's not.

If you take the win percentage for games where the opponent had 7 or fewer vs. 8 or more days to prepare for each season as a measurement, and weight it by the number of such games played that season, the average win percentages lie 1.5 standard deviations apart.

Not mind-blowingly significant especially considering the small sample size.
 
Find me another school who's got a higher differential in the same analysis.
 
I predict over 10% of schools will have a differential the same or greater, given the 1.5-standard-deviation result; however, performing this analysis on ~125 schools over the last 4 years is not worth my time. I would love to see the result if you did it, though.
 
standard deviation of what?

If you predict 10% of schools will have a comparable differential, then you shouldn't have to look at anywhere near all ~125.

If your 10% prediction is correct, there is about a 90% chance you would find one among the first 20 tries.
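The roughly-90% figure is just the complement of missing 20 times in a row; a quick sketch in Python, assuming each school is an independent 10% shot:

```python
# Chance of at least one hit in n independent tries, each with
# probability p (here: the predicted 10% of comparable schools).
def chance_of_hit(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(round(chance_of_hit(0.10, 20), 3))  # 0.878, i.e. roughly 90%
```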
 

For example, the season win percentages for games where the opponent had more than 7 days to prepare:

0.5 0.5 0.5 0.5 0 0 0.2 0.2 0.2 0.2 0.2 0.25 0.25 0.25 0.25
AVG = 0.267
STDEV = 0.165

Then for 7 or fewer days to prepare:

AVG = 0.742
STDEV = 0.149

For the two uncertainty ranges to overlap, you would have to define the uncertainty as 1.5 standard deviations.
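For what it's worth, the AVG/STDEV figures above can be reproduced with Python's statistics module (sample standard deviation), and the gap between the two group means works out to about 1.5 times the summed deviations:

```python
from statistics import mean, stdev

# Season win percentages listed above for games where the opponent
# had more than 7 days to prepare.
long_prep = [0.5, 0.5, 0.5, 0.5, 0, 0, 0.2, 0.2, 0.2, 0.2, 0.2,
             0.25, 0.25, 0.25, 0.25]

print(round(mean(long_prep), 3))   # 0.267
print(round(stdev(long_prep), 3))  # 0.165

# Gap between the two group means, measured against the summed
# standard deviations (0.742 and 0.149 are the 7-or-fewer figures
# quoted above):
print(round((0.742 - 0.267) / (0.165 + 0.149), 2))  # about 1.51
```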
 
Would you bet me? How much?
 
I still don't understand what you are doing here.

As far as I know, the way to look at this is to use a chi-squared test (or Fisher's exact test, since the number of cases is a couple too few), which tells you whether the difference between your two sets is statistically significant.

Both the chi-squared test and Fisher's exact test on 21-8 vs. 4-11 give a very small chance (less than 1%) that there is no difference between the two sets, i.e. the data actually has meaning.
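For anyone who wants to check the sub-1% figure without a stats package, Fisher's exact test on that 2x2 table can be done from scratch via the hypergeometric distribution. A minimal sketch, not a library implementation:

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 table: sum the
    probabilities of every table (with the same margins) that is no
    more likely than the observed one."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def pmf(x):
        # hypergeometric probability of x in the top-left cell
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    observed = pmf(a)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    # the tiny factor guards against float round-off at the boundary
    return sum(p for x in range(lo, hi + 1)
               if (p := pmf(x)) <= observed * (1 + 1e-9))

# GT vs. BCS opponents: [wins, losses] with <=7 days prep on top,
# [wins, losses] with 8+ days prep below.
p_value = fisher_exact_two_sided([[21, 8], [4, 11]])
print(p_value)  # works out to just under 0.01
```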
 
Below is what I have done before; I have updated the data to 2008-2011. The difference is 1.07 with extra prep vs. 1.19 with no extra prep. There was a small mistake before because I didn't separate the two sets before calculating the means.

I have done a t-test on the set of ratio numbers, and there is only about a 10% chance that there is no statistical meaning to these numbers.

I have looked at Yards per Play (YPP) for games where opponents had 7 days to prepare vs. more than 7 days to prepare.

This comparison is only for FBS opponents and we do worse against teams with more preparation. I do not know for sure if this comparison is meaningful or if it is statistically significant data. I have put in all the data in google docs (check out all 3 sheets) and shared it online here: <Link>

The stat I am actually looking at is the ratio of YPP GT got against a team to YPP that team's defense had for the whole season.

In other words the ratio I am looking at:
Ratio=
(YPP GT had in game against team X) /
(YPP team X's defense had all season)

The average of these ratios for 2008-2010:
Teams with 7 or less: 1.15
Teams with more than 7: 1.06

The same analysis done on the Yards per Carry ratio (YPCR) and Yards per Pass Attempt ratio (YPPAR) is as follows:
Teams with 7 or less: 1.37 (YPCR) 1.27 (YPPAR)
Teams with more than 7: 1.33 (YPCR) 1.03 (YPPAR)

It's also important to know how good the defenses are on average:
Teams with 7 or less: 5.16 yards per play
Teams with more than 7: 5.05 yards per play

The difference in defenses is not big enough to explain the ratios, but still good to know.
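A minimal sketch of the ratio calculation described above. The game lines are made-up numbers purely for illustration; the real data is in the linked spreadsheet:

```python
# Made-up game lines purely for illustration -- the real data lives
# in the linked spreadsheet.
games = [
    # (opponent, gt_yards, gt_plays, opp_def_season_ypp, extra_prep)
    ("Team A", 455, 70, 5.2, False),
    ("Team B", 380, 68, 5.0, True),
    ("Team C", 490, 72, 5.3, False),
    ("Team D", 350, 66, 5.1, True),
]

def ypp_ratio(yards, plays, def_season_ypp):
    """GT's yards per play in one game, relative to what that
    opponent's defense gave up per play over the whole season."""
    return (yards / plays) / def_season_ypp

for extra, label in [(False, "7 days or less"), (True, "more than 7")]:
    ratios = [ypp_ratio(y, p, d) for _, y, p, d, e in games if e == extra]
    print(f"{label}: average ratio {sum(ratios) / len(ratios):.2f}")
```

A ratio above 1 means the offense beat that defense's season-long average.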
 
IMO this is the most meaningful analysis I've seen. It shows that (as most of us would expect), we do not fare as well offensively when our opponent has extra prep time. Calculating the average YPP of the opposing defenses was a nice touch, too.

Still, I think the idea that this affects GT so much more than any other school is overblown. While statistically significant, I would still expect to see similar trends at other schools.
 
Confounding factor: BCS teams we play after a longer break are typically better BCS teams. We purposely schedule that way.

This could be corrected for by determining the average ranking of the >=8-day teams and then randomly choosing a subset of the <=7-day teams that has the same average ranking. I would not be surprised if p > 0.05 then, when comparing the records.
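One way to sketch that correction: repeatedly draw random subsets of the short-prep opponents until one matches the long-prep group's average ranking within a tolerance. All rankings below are hypothetical placeholders, not the actual schedules:

```python
import random

# Hypothetical end-of-season rankings (lower = better); illustrative
# numbers only, not the real opponents.
long_prep_ranks = [10, 15, 22, 28, 33, 39, 45, 50, 56, 61, 66, 72, 78, 84]
short_prep_ranks = [8, 14, 19, 24, 29, 35, 40, 46, 51, 57, 62, 68, 73, 79,
                    84, 90, 95, 101, 106, 112, 117, 123, 128, 134, 139]

target = sum(long_prep_ranks) / len(long_prep_ranks)

def matched_subset(pool, size, target_avg, tol=3.0, tries=200_000, seed=1):
    """Randomly draw subsets of `pool` until one has an average ranking
    within `tol` of `target_avg`, so the two groups of opponents are
    comparable in strength before the records are compared."""
    rng = random.Random(seed)
    for _ in range(tries):
        subset = rng.sample(pool, size)
        if abs(sum(subset) / size - target_avg) <= tol:
            return subset
    return None  # no match found; widen tol or add more candidates

subset = matched_subset(short_prep_ranks, len(long_prep_ranks), target)
print(subset)
```

With a matched subset in hand, you would then rerun the win-loss comparison on just those games.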
 
Here is the list of >=8-day BCS games:
Clemson
UNC
UGA
LSU
Miami
Iowa
UNC
NCSU
UGA
AFA
VT
NCSU
UVA
VT

Here is a randomly chosen list of <=7-day BCS games:
Wake
Clemson
Kansas
Kansas
Miami
Maryland
VT
UVA
UNC
UVA
Miami
BC
Duke
Duke

Which list looks tougher?

I'm not saying there isn't anything to the days of preparation, but I believe the strength of BCS opponents that we play after extra days of preparation needs to be factored in.
 
As has unfortunately been proven through the last four years, prep time matters. Has anyone done a check on how many of our opponents get more than 1 week prep time this year?

edit:

I'll do it, one sec..

Off hand, I'd say prep time matters for most every team in most every game.
 
In other news: people who are in the sun 7+ hours are more likely to get a sunburn than those who have been out in it less than 7 hours.
 