The Purpose of These Statistics
The author created these statistics in an effort to understand how he
and the IESG perform in some of their tasks. How long does
it take to get a document approved? Where are the bottlenecks?
The primary use of these statistics is to create awareness of what
is happening in the measured processes. The AD and the people who work
with him may have their own perceptions of how the process is
working. Often that perception does not match reality. For instance,
small delays here and there add up to a considerable overall
delay.
Another potential use of these statistics is to create goals for
improvement. For instance, the author has a goal to push his overall
document approval delay under three months. The intent is to track
actual end-to-end delay from publication request to approval, rather
than set arbitrary timelines for individual steps in the process. It
is also expected that reaching these goals is realistic, given that
the starting point is known from previous measurements, and because
each AD can focus on their own situation; the size and situation
of different areas can be quite different.
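The end-to-end delay described above can be sketched as a simple
computation over per-document dates. This is only an illustration: the
dates below are invented, and the real statistics tool may gather and
aggregate its data differently.

```python
from datetime import date
from statistics import median

# Hypothetical records: (publication request date, approval date) per
# document. Invented values for illustration only.
documents = [
    (date(2006, 1, 10), date(2006, 3, 2)),
    (date(2006, 2, 1), date(2006, 7, 15)),
    (date(2006, 3, 5), date(2006, 5, 20)),
]

# End-to-end delay in days for each document, from request to approval.
delays = [(approved - requested).days for requested, approved in documents]

# Compare the median delay against a three-month (roughly 90-day) goal.
median_delay = median(delays)
print(f"median delay: {median_delay} days, goal met: {median_delay < 90}")
```

Using the median rather than the mean keeps a single multi-year problem
document from dominating the picture, which matters given the variation
discussed under Limitations below.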
Another potential use is to be able to answer questions about what
causes problems. Can a change in the number of documents approved by
the IESG be explained by a particular reason? Where does most of the
time go in my own document processing? Is waiting for a last call to
end or for an IESG telechat date a significant factor in the overall
processing delay? How do processing times differ between documents
that required no changes and those that did? Answering such questions
also points out possible areas of improvement.
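The "where does the time go" question amounts to breaking the total
delay down by process phase. A minimal sketch, with invented phase
names and durations (the tool's actual categories may differ):

```python
# Hypothetical per-phase durations (in days) for one document; the
# phase names and values are illustrative, not the tool's real output.
phases = {
    "AD evaluation": 20,
    "Last call": 14,
    "Waiting for telechat": 25,
    "Revisions after IESG review": 40,
}

total = sum(phases.values())

# Report each phase's share of the total delay, largest first, which
# highlights the most promising target for improvement.
for name, days in sorted(phases.items(), key=lambda p: p[1], reverse=True):
    print(f"{name}: {days} days ({100 * days / total:.0f}%)")
```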
Finally, these numbers provide additional transparency
to the actions of the IESG and ADs.
Details of the tool's measurement mechanisms are available in the
tool description.
Limitations
These statistics merely reflect what is easily measurable. They cannot
be used as a simplistic model for deciding what is efficient or
inefficient, for the following reasons:
- The ADs have tasks much beyond mere document processing. For
instance, they are expected to guide the direction of the area. These
tasks cannot be measured.
- It is not clear that the goal is a large number of approved
documents. The goal is to produce high-quality documents, and the
right documents. Is an AD who approves only a few documents doing the
right thing by pushing back on bad ideas, or is he inefficient?
The numbers do not provide an answer.
- Areas and workloads differ. An AD may have anything from a few WGs
to more than 20 WGs and BOFs.
- There are a number of measurement difficulties. One is that there
is great variation between documents. Many ADs have problematic
documents that have taken three or more years to fix. How should such
documents be represented in the numbers? Another issue is that it
takes a while for new ADs to get up to speed, and ADs who are stepping
down typically produce a significant number of approved documents right
at the end of their term; it is hard to account for systematic
differences such as these.
As a result, it is inappropriate to use these numbers to rate
different ADs or label ADs good or bad. It is also inappropriate to
focus only on these numbers and attempt to improve them while
forgetting the more important parts of the work that the ADs do.
The Actual Statistics
Are you sure you have read and understood the above limitations? If
yes, please proceed to the actual statistics.