2. Background

This chapter contains background information on the Mobility Monitoring Program, and compares and contrasts this Program with other national performance monitoring programs. This chapter also highlights several examples of state and local performance monitoring programs.

National Performance Monitoring Programs

There are three programs at a national scale that attempt to measure city-level traffic congestion, and two of these three programs also track travel reliability. These programs are as follows:

  • Mobility Monitoring Program — The main subject of this report, this program uses archived traffic detector data to monitor traffic congestion and travel reliability in nearly 30 cities. This program is sponsored by FHWA and supported by the Texas Transportation Institute and Cambridge Systematics, Inc.
  • Urban Congestion Reporting Program — This program gathers current traveler information reports from websites, archives the data, and provides monthly reports on traffic congestion and reliability in about 10 cities.1 This program is sponsored by FHWA and supported by Mitretek.
  • Urban Mobility Report — This effort uses aggregate data from FHWA’s Highway Performance Monitoring System (HPMS) to produce an annual report on traffic congestion and its impacts (wasted time, wasted fuel, and their costs) in the 85 largest cities in the United States.2 The annual report receives extensive media coverage. The study is sponsored by the American Road and Transportation Builders Association, the American Public Transportation Association, and the Texas Transportation Institute.

1 Wunderlich, K., S. Jung, J. Larkin, and A. Toppen. Urban Congestion Reporting (UCR) Methodology and Findings: Year 1. Draft Report, December 2003.
2 Schrank, D. and T. Lomax. 2004 Urban Mobility Report. Texas Transportation Institute, September 2004, available at http://mobility.tamu.edu/ums/.

The following sections provide an overview of these programs. Various elements of these three activities are compared in Table 1.

Table 1. Key Features of National Performance Monitoring Programs

| Feature | Urban Mobility Report | Urban Congestion Report | Mobility Monitoring Program |
| --- | --- | --- | --- |
| Number of cities in 2004 | 85 | 10 | 30 |
| Expected cities in 2005 | 85 | About 15 | About 35 |
| Years available | 1982 to current | 2002 to current | 2000 to current |
| Source of data | HPMS (AADT, number of lanes, ITS deployments) | Travel times from websites (combination of reported travel times and TMC data) | Archived direct measurements of speeds, volumes, and travel times |
| Reliability measured? | No | Yes | Yes |
| Events monitored? | No | Incidents and weather (work zones planned) | No, but weather and incident data planned |
| Geographic coverage of data | All roadways in urbanized area | Covered highways (mostly instrumented freeways) | Instrumented freeways |
| Temporal coverage of data | Annual averages | Weekday (5:30 a.m. to 8:30 p.m.) | Continuous (24 hours per day, 365 days per year) |
| Geographic reporting (analysis) scale | Areawide | Areawide | Areawide and directional routes |
| Temporal reporting (analysis) scale | Average annual and total statistics | Peak period | Weekend/weekday; peak and off-peak periods |
| Analysis timeframe | Annual | Monthly | Annual; monthly for some cities |
| Time lag for reporting | 18 months | 10 working days | 15 days for monthly reports; 6-9 months for annual report |

Mobility Monitoring Program

The Mobility Monitoring Program (MMP) calculates route and system performance measures based on data collected and archived by traffic management centers. These data are detailed direct measurements from roadway-based sensors installed for operational purposes. Data from spot locations (volumes and speeds) are used, as well as travel times from probe vehicles where available. For each participating city, the program team develops congestion and reliability statistics at both the directional route and areawide levels. The Program started in 2001 (with an analysis of 2000 data) in 10 cities. By 2004, the Program had grown to include nearly 30 cities with about 3,000 miles of freeway.

The concepts, performance measures, and data analysis techniques developed and used in the MMP are being considered for adoption and implementation by several State and local agencies. A few of these agencies have contacted the project team to request technical assistance or additional detailed information on performance monitoring or operations data archiving. In fact, one of the two primary objectives of the MMP has been to provide incentives and technical assistance for the implementation of data archiving systems to support performance monitoring. Several examples of these technology transfer and implementation activities are:

  • Data quality control procedures have been developed for archived traffic data, and many locally developed archives now use these procedures (a sketch of typical checks follows this list).
  • Customized local analyses have been performed on a selective basis. As a way to promote local use of the archived data, the MMP team has demonstrated how their data may be used to supplement traffic counting programs (Phoenix and Cincinnati) and as input to air quality models (Louisville and Detroit).
  • A data warehouse of archived traffic data that has been checked for quality and put into a standard format is available for research and other FHWA purposes. For example, the data are currently being used in FHWA’s “Estimating the Transportation Contribution to Particulate Matter Pollution” project and are being considered as a validation source for FHWA’s “Next Generation Traffic Simulation Models” project.
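
This report does not enumerate the MMP’s specific quality control rules, but the sketch below illustrates the kind of range and consistency checks commonly applied to archived detector records. The thresholds and the record layout are illustrative assumptions, not the MMP’s published values.

```python
# Illustrative range/consistency checks for a 5-minute detector record.
# Thresholds are assumptions for this sketch, not the MMP's published rules.
from typing import List, NamedTuple

class DetectorRecord(NamedTuple):
    volume: int       # vehicles per lane per 5 minutes
    speed: float      # miles per hour
    occupancy: float  # percent of time the detector is occupied

def quality_flags(rec: DetectorRecord) -> List[str]:
    """Return a list of quality problems found in one record (empty if clean)."""
    flags = []
    if not 0 <= rec.volume <= 250:            # ~3,000 veh/hr/lane ceiling
        flags.append("volume out of range")
    if not 0 <= rec.speed <= 100:
        flags.append("speed out of range")
    if not 0 <= rec.occupancy <= 100:
        flags.append("occupancy out of range")
    if rec.volume > 0 and rec.speed == 0:     # vehicles counted but no movement
        flags.append("inconsistent volume/speed")
    return flags

print(quality_flags(DetectorRecord(volume=40, speed=62.0, occupancy=8.5)))   # []
print(quality_flags(DetectorRecord(volume=300, speed=0.0, occupancy=120.0)))
```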

The MMP also reports data gathered through FHWA’s Intelligent Transportation Infrastructure Program (ITIP). The ITIP is an ongoing program designed to enhance regional surveillance and traffic management capabilities in up to 21 metropolitan areas, while also developing the ability to measure operating performance and expanding traveler information. It is a public/private partnership involving FHWA, participating State and local transportation agencies, and Mobility Technologies, Inc. Under this partnership, Mobility Technologies is responsible for deploying and maintaining traffic surveillance devices and for integrating data from these devices with existing traffic data to provide consolidated real-time and archived data for the participating metropolitan areas. As of late 2004, deployment has been completed in Philadelphia, Pittsburgh, Chicago, Providence, and Tampa. Deployment is under way in Boston, San Diego, Washington, DC, Phoenix, Los Angeles, San Francisco, Detroit, St. Louis, and Oklahoma City. Negotiations are currently active in seven additional cities.

Part of the ITIP is the routine production of performance measures. The metrics used to report performance are based on those in the MMP: travel time index, buffer index, percent congested travel, and total delay. Performance measure reports are provided to the U.S. Department of Transportation (DOT) and FHWA on a monthly and annual basis, and are also included in the corresponding MMP reports. The monthly reports for each completed metropolitan area are based on monthly data and are consistent in content and format with the city reports that are part of the Mobility Monitoring Program.
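
As a brief illustration of how these four metrics relate to archived speed and volume data, the sketch below computes them for a single hypothetical freeway segment. The free-flow speed, the 45-mph congestion threshold, and all sample values are assumptions made for the sketch; the actual MMP computations aggregate across many segments and weight by vehicle-miles traveled.

```python
# Minimal sketch of the four metrics for one hypothetical freeway segment.
# Assumptions (not from this report): 60-mph free-flow speed, 5-minute
# observation intervals, and "congested" defined as speeds below 45 mph.

FREE_FLOW_MPH = 60.0
SEGMENT_MILES = 2.0

# Each record: (average speed in mph, vehicle volume) for one interval.
observations = [(58, 400), (42, 650), (30, 700), (55, 500)]

free_flow_tt = SEGMENT_MILES / FREE_FLOW_MPH              # hours
travel_times = [SEGMENT_MILES / spd for spd, _ in observations]
mean_tt = sum(travel_times) / len(travel_times)

# Travel time index: mean travel time relative to free-flow travel time.
tti = mean_tt / free_flow_tt

# Buffer index: extra margin between the 95th percentile and the mean trip.
p95_tt = sorted(travel_times)[int(round(0.95 * (len(travel_times) - 1)))]
buffer_index = (p95_tt - mean_tt) / mean_tt

# Percent congested travel: share of VMT occurring at speeds below 45 mph.
total_vmt = sum(vol * SEGMENT_MILES for _, vol in observations)
congested_vmt = sum(vol * SEGMENT_MILES for spd, vol in observations if spd < 45)
pct_congested = congested_vmt / total_vmt

# Total delay: vehicle-hours of travel beyond free-flow travel time.
total_delay = sum(vol * (tt - free_flow_tt)
                  for (_, vol), tt in zip(observations, travel_times))

print(f"TTI={tti:.2f}  buffer index={buffer_index:.0%}  "
      f"congested travel={pct_congested:.0%}  delay={total_delay:.1f} veh-hr")
```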

Urban Congestion Reporting Program

The Urban Congestion Reporting (UCR) Program is sponsored by FHWA to provide a monthly snapshot of roadway congestion in 10 urban areas using three national composite measures. The UCR uses efficient, automated data collection procedures (colloquially known as “screen scraping” or “web mining”) to obtain travel times directly from traveler information websites and archive them at five-minute intervals on the weekdays when these services are available. Since a monthly report can be constructed rapidly (within 10 working days), the UCR serves as an early warning system for changes in urban roadway congestion. Concurrent with the travel time data collection, other UCR acquisition programs obtain web-based data on weather conditions and traffic incidents (work zone activity is planned). This allows the UCR monthly report to include not only congestion levels but also a range of possible contributing factors. A one-page overview tells the congestion story each month in a graphical manner for the analyst or administrator who wants a timely composite picture of month-to-month congestion trends.
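
The UCR collection software itself is not described in this report, so the following is only a minimal sketch of what such a “screen scraping” archiver might look like. The URL, page format, and parsing pattern are hypothetical placeholders.

```python
# Hypothetical sketch of a UCR-style travel time archiver: poll a traveler
# information page every five minutes and append the readings to a CSV file.
# The URL and the regular expression are illustrative placeholders only.
import csv
import re
import time
import urllib.request

URL = "http://example.org/travel-times"                  # placeholder site
PATTERN = re.compile(r"(I-\d+[NSEW]?):\s*(\d+)\s*min")   # e.g. "I-95N: 12 min"

def poll_once(archive_path: str) -> None:
    """Fetch the page once and append timestamped route travel times."""
    html = urllib.request.urlopen(URL, timeout=30).read().decode("utf-8")
    timestamp = time.strftime("%Y-%m-%d %H:%M:%S")
    with open(archive_path, "a", newline="") as f:
        writer = csv.writer(f)
        for route, minutes in PATTERN.findall(html):
            writer.writerow([timestamp, route, minutes])

if __name__ == "__main__":
    while True:
        try:
            poll_once("travel_times_archive.csv")
        except OSError:
            pass  # skip an interval if the site is unreachable
        time.sleep(300)   # five-minute archiving interval, as in the UCR
```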

Urban Mobility Report

The Urban Mobility Report (UMR) tracks congestion patterns in 85 of the largest metropolitan areas, with historical data dating back to 1982. The UMR has been instrumental both as a source of trend information and in the development of concepts and metrics for congestion monitoring. For example, the widely used travel time index is a performance measure concept that originated with the UMR. The UMR relies on the Highway Performance Monitoring System (HPMS) as its source of information. It uses the average annual daily traffic (AADT) and number-of-lanes data in HPMS as the basis for its estimates; these are then translated into congestion metrics using predictive equations that have been developed and tested specifically for the UMR. Beginning in 2002, the UMR has also considered the positive effects that operational strategies have on system performance; these are accounted for as adjustments to the base performance predicted by AADT and number of lanes. The UMR has widespread visibility both within the transportation profession and with the general public; the annual release of the UMR generates a significant amount of media interest and coverage.
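
The UMR’s actual predictive equations are not reproduced here. Purely to illustrate the general shape of the transformation, the sketch below converts HPMS-style inputs (AADT and number of lanes) into a rough travel time index using a made-up BPR-style volume-delay curve and assumed peaking parameters; none of these parameters come from the UMR.

```python
# Illustrative sketch only: the actual UMR predictive equations are not
# reproduced here. This shows the general shape of the transformation from
# HPMS inputs (AADT, lanes) to a congestion estimate, using a made-up
# BPR-style volume-delay curve and assumed peaking parameters.

def estimated_tti(aadt: float, lanes: int,
                  peak_share: float = 0.09,       # assumed share of AADT in peak hour
                  lane_capacity: float = 2000.0   # assumed vehicles/hour/lane
                  ) -> float:
    """Rough travel time index from AADT and lane count (illustrative)."""
    peak_flow = aadt * peak_share / lanes         # peak vehicles per hour per lane
    v_over_c = peak_flow / lane_capacity
    # BPR-style curve: travel time grows with the fourth power of v/c.
    return 1.0 + 0.15 * v_over_c ** 4

# Example: a six-lane freeway carrying 150,000 vehicles per day.
print(f"Estimated TTI: {estimated_tti(150_000, 6):.2f}")
```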

Comparison of Programs

All three of these national programs use data collected for other purposes. This is particularly true for the UCR and MMP, which use operations-based data. Not having to implement a special data collection program solely for performance monitoring is a powerful argument for FHWA to make: it shows that the agency is using its resources wisely. As indicated earlier in Table 1, the three performance monitoring programs use different data and have slightly different outputs. The strengths and weaknesses of the three efforts are summarized in the following paragraphs.

The UMR has the longest history, provides a widely accepted benchmark for comparisons, and covers all major freeways and arterial streets in an area. To date, it has served as the basis for FHWA performance reporting to others in U.S. DOT. However, because it is based on transforming HPMS data (annual average daily traffic volumes and number of lanes) into congestion metrics, it provides only an indirect estimate of congestion at an areawide level and does not consider travel time reliability. Also because of its reliance on HPMS, there is a substantial lag in reporting (typically 18 months).

The UCR has been the timeliest of the three programs, providing monthly congestion and reliability statistics. However, the MMP is expected to provide similar monthly reports for a greater number of cities (an estimated 20 cities by mid-2005). The UCR program provides a general assessment of events that influence congestion (incidents and weather, with work zones planned). Since it is based on whatever information is posted to websites, highways other than freeways can be included (although this coverage is currently very limited). However, the UCR data sources (website-based travel times, sometimes self-reported by commuters) are limited to those highways covered by websites that offer travel time reports. Also, because traffic volumes are not available, total delay cannot be computed, and relative comparisons between cities can be misleading. Generally, the number and quality of traveler information services have been improving over time, but these external changes can have a significant effect on UCR congestion measures as a whole.

The MMP provides the most detailed picture of congestion (in terms of the time and geographic scales reported, as well as multiple reliability statistics and VMT-weighted results). It also serves as an outreach mechanism, promoting the use of performance measures, quality control procedures, and archived data by State and local agencies. Recent automation has reduced the reporting lag, and the program team has begun producing monthly reports for 10 cities, with plans to expand this to 20 cities by mid-2005.
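
As a brief illustration of the VMT weighting mentioned above, the sketch below combines per-route travel time indexes into a single areawide value. The route names and figures are invented for the example.

```python
# VMT-weighted areawide travel time index from directional-route results.
# Route names and values are hypothetical illustrations.
routes = [
    # (route, vehicle-miles traveled, travel time index)
    ("I-10 EB", 1_200_000, 1.45),
    ("I-10 WB", 1_150_000, 1.38),
    ("SR-51 NB",  400_000, 1.12),
]

total_vmt = sum(vmt for _, vmt, _ in routes)
areawide_tti = sum(vmt * tti for _, vmt, tti in routes) / total_vmt
print(f"Areawide TTI: {areawide_tti:.2f}")  # busier routes dominate the average
```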

Additionally, there are two other efforts that also relate to national monitoring of traffic congestion and travel trends:

  • Highway bottleneck study — In a 2004 report titled “Unclogging America’s Arteries,” the American Highway Users Alliance identified 233 highway bottlenecks where delay exceeded 700,000 hours annually.3 The report also provided an in-depth analysis of the worst 24 bottlenecks in the country, each of which had annual delay greater than 10 million hours. A previous edition of this report on highway bottlenecks was published in 1999.
  • National Household Travel Survey (NHTS) — The NHTS is a U.S. DOT effort sponsored by the Bureau of Transportation Statistics (BTS) and the FHWA to collect data on both long-distance and local travel by the American public.4 The joint survey gathers trip-related data such as mode of transportation, duration, distance and purpose of trip.

3 Cambridge Systematics, Inc. Unclogging America’s Arteries: Effective Relief for Highway Bottlenecks. American Highway Users Alliance, February 2004, available at http://www.highways.org/.
4 See http://www.bts.gov/programs/national_household_travel_survey/ for more information on the National Household Travel Survey.

The highway bottleneck study and the NHTS are mentioned here for completeness, but the results of these activities are not directly comparable with the three other programs mentioned earlier. The highway bottleneck study only considers specific locations and does not address areawide estimates of congestion, nor does it address annual trends. The NHTS relies on self-reporting of person trips, and as such, does not use the same methods of traffic data collection as employed in several of the congestion monitoring programs. For example, the NHTS reports average travel times to work (as reported by commuters) but does not report average speeds for these trips.

Examples of State and Local Programs

There are numerous State and local performance monitoring programs with objectives similar to those of the Mobility Monitoring Program, the major difference being geographic scale. In fact, several of these State and local programs use the same archived traffic data as the Mobility Monitoring Program. This section briefly highlights several examples. Additional information and detailed case studies on State and local performance monitoring programs will be included in a forthcoming NCHRP report on freeway performance monitoring.5

5 See NCHRP Project 3-68, “Guide to Effective Freeway Performance Measurement,” http://www4.trb.org/trb/crp.nsf/All+Projects/NCHRP+3-68.

Atlanta, Georgia

The Georgia DOT has developed an Operations Business Plan that is driving the implementation of performance measures at NaviGAtor (http://www.georgia-navigator.com/), the regional traffic management center for Atlanta. The plan follows the “vision-goals-objectives-performance measures-targets-actions” sequence for achieving change through a performance-based process.

The NaviGAtor staff uses output measures for incident management and staff efficiency to provide benefits information to GDOT management and to the public in a weekly newsletter. The incident and traveler information data are published monthly and distributed to GDOT management staff and others. The staffing measures are used to evaluate personnel performance, to adjust staff size and hours, and to better define operators’ shift hours. The HERO (service patrol) drivers’ incident data are used to adjust individual patrol routes and to determine the number of HERO drivers needed per shift.

Some of the outcome and output measures currently used by NaviGAtor include:

Outcome

  • Hampered by data quality concerns. NaviGAtor is currently experimenting with two categories of congestion: moderate (speeds between 30 and 45 mph) and severe (speeds below 30 mph), and is considering additional performance measures, including reliability.

Output

  • Traveler Information Calls
    • Total calls
    • Calls per day
    • Calls per route
    • Calls by type of call
    • Average call length
    • Average answer time
  • Incidents managed
    • By category
    • Detection method
    • Impact levels (general categories)
  • Number of construction closures
  • Device functioning
    • % of time devices are available
  • Number of media communications by outlet
  • Website visits by type of information requested

Minneapolis-St. Paul, Minnesota

In an annual Departmental Results report, the Minnesota DOT (MnDOT) tracks a number of performance measures statewide. Performance measures have therefore become part of an institutional reporting process. Performance data also serve as a basis for Metro Council plans and reports such as the Regional Transportation Plan, the Transportation Improvement Plan, the annual update of the Transportation Systems Audit and various operations studies. Although performance measures generally are not linked directly to specific investments, the findings and recommendations of the plans ultimately play a part in influencing investment decisions.

The operations outcome measures of travel speed and its derivatives, travel time and reliability, are just now being developed for use by regional traffic center staff. The primary reasons for the previous non-use are data quality concerns (described in the Data Quality section) and the recent move to the new regional traffic center building. The regional traffic center staff uses output measures for incident management and staff efficiency to provide benefits information to MnDOT management and to the public. The incident data are published monthly by regional traffic center staff and distributed to MnDOT management. The staffing measures are used to evaluate personnel performance, to adjust staff size and hours, and to better define operators’ shift hours. The FIRST (incident response) drivers’ incident data are used to adjust individual patrol routes and to determine the number of FIRST drivers needed per shift.

The operations agencies plan to track these performance measures:

  • Average incident duration
  • Percent of highway miles with peak-period speeds below 45 mph
  • Travel Time Index
  • Travel times on selected segments, including mean, median, and 95th percentile (a computation sketch follows this list)
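
The segment travel time statistics in the last item above can be computed directly from archived travel time records. The following is a minimal sketch using hypothetical sample values.

```python
# Sketch: mean, median, and 95th percentile travel time for one segment,
# computed from archived per-interval travel times (minutes).
# The sample values are hypothetical.
import statistics

travel_times_min = [11.2, 10.8, 12.5, 18.9, 11.0, 13.4, 25.1, 11.6, 12.0, 14.2]

mean_tt = statistics.mean(travel_times_min)
median_tt = statistics.median(travel_times_min)
# quantiles(n=20) returns 19 cut points; the last one is the 95th percentile.
p95_tt = statistics.quantiles(travel_times_min, n=20)[18]

print(f"mean={mean_tt:.1f} min, median={median_tt:.1f} min, 95th={p95_tt:.1f} min")
```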

The planning agencies plan to track these performance measures:

  • HOV usage
  • Roadway congestion index
  • Percent of daily travel in congestion
  • Percent of congested lane-miles in the peak period
  • Percent of congested person-miles of travel
  • Annual hours of delay
  • Change in citizens’ time spent in delay
  • Congestion impact on travel time
  • Travel Time Index

Phoenix, Arizona

The original impetus for performance monitoring in Phoenix was to support the Arizona DOT’s Strategic Action Plan, which is performance-based. Performance measures are at the core of this effort. However, agencies are discovering uses for performance measures beyond fulfilling the requirements of the Strategic Action Plan.

Performance measures are used by the traffic management center staff for operations, emergency response, and traveler information applications. Each measure is employed to achieve the objectives set forth in the Arizona DOT’s Strategic Action Plan. Monitoring speeds and volumes with archived traffic data allows center staff to measure the average percentage of Phoenix freeways reaching level of service (LOS) “E” or “F” on weekdays, and thus to determine whether the objective of operating 60 percent of the freeways at LOS “D” or better during rush hour is being met. For freeway construction, performance measures are used in three ways: they help bolster the priority of freeways versus other transportation projects, they provide justification for the one-half-cent sales tax for construction of controlled-access highways, and they are used to prioritize implementation.
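
A minimal sketch of that LOS check appears below: classify each weekday rush-hour speed observation and compare the LOS “D”-or-better share against the 60 percent objective. The 48-mph threshold and the sample speeds are illustrative assumptions; actual LOS determination follows Highway Capacity Manual criteria.

```python
# Sketch: share of weekday rush-hour freeway observations at LOS "D" or
# better, compared with the 60 percent objective. The speed threshold is an
# illustrative assumption, not a Highway Capacity Manual criterion.

def at_los_d_or_better(speed_mph: float) -> bool:
    """Coarse check: speeds of 48 mph or more are treated as LOS 'D' or better."""
    return speed_mph >= 48.0

# Hypothetical weekday rush-hour speeds (mph), one per segment-interval.
speeds = [62, 58, 50, 45, 35, 57, 49, 61, 38, 52]

share = sum(at_los_d_or_better(s) for s in speeds) / len(speeds)
print(f"{share:.0%} of observations at LOS D or better (objective: 60%)")
```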

The operations agencies track these outcome performance measures:

  • Speed
  • Average % of freeways reaching LOS E or F on weekdays
  • Traffic volumes/counts
  • Vehicle occupancy

The planning agencies track these performance measures:

  • Congestion Index (% of posted speed)
  • Travel time
  • Segment delay (seconds/mile)
  • Stop delay (< 3 mph) (seconds/mile)
  • Average speed (% of posted speed)
  • Average speed (mph)
  • Average HOV lane speed (mph)
  • Running speed (length / (travel time - stop delay))
  • Total volume
  • HOV lane volume
  • General purpose lane volume
  • % peak period truck volume
  • % peak period volume
  • Lane-miles operating at LOS F
  • Hours operating at LOS F

Seattle, Washington

A mandate from the state legislature resulted in the performance report known as Measures, Markers, and Mileposts, the “Departmental accountability” report published by the Washington State DOT (WSDOT) each quarter to inform the legislature and the public about how the Department is responding to public direction and spending taxpayer resources.6 Agencies now use performance measures as part of everyday practice to help make informed decisions.

6 See http://www.wsdot.wa.gov/accountability/ for more information on WSDOT’s program.

WSDOT uses performance measures to help allocate resources, determine the effectiveness of a variety of programs, and help plan and prioritize system improvements, primarily from an operations perspective. A variety of measures are computed. Not all of these measures are routinely reported outside the Department, but key statistics that describe the current state of the system, emerging trends, or the effectiveness of major policies are reported quarterly as part of the Department’s efforts to explain why it is taking specific actions and to improve its accountability to the public and to public decision makers. The planning groups use performance measures to compare the relative performance of various corridors or roadway sections under study. These aggregated statistics can also be converted to unit values (e.g., person-hours of delay per mile) to further improve the ability to compare and prioritize the relative condition of corridors or roadway segments.

The operations agencies track both output and outcome performance measures:

Sample of Operations Planning/Output Measures

  • number of loop detectors deployed
  • number of loops functioning currently
  • percentage of loops functioning during a year
  • number of service patrol vehicles currently deployed
  • number of hours of service patrol efforts supplied by WSDOT
  • number of motorist assists by type of assistance provided
  • number, duration, and severity of incidents by location (roadway segment) and type of incident

Operations Planning/Outcome Measures

  • average vehicle volume by location and time of day and by type of facility (HOV/GP lane)
  • average person volume by location and time of day and by type of facility
  • frequency of severe congestion (LOS F) by location
  • average travel time by corridor and major trip (O/D pairs)
  • 95th percentile travel time by corridor and major trip (also reported as Buffer Time)
  • number of very slow trips (half of free-flow speed) that occur each year by time of day and major trip
  • amount of lost efficiency (speeds less than 45 mph) by location
  • number of times that HOV lanes fail to meet adopted travel time performance standards
  • percentage of HOV lane violators observed by monitoring location