Overview
In accordance with Section 12 of Chapter 11 of the Massachusetts General Laws, the Office of the State Auditor has conducted a performance audit of certain activities of the Massachusetts Bay Transportation Authority (MBTA) for the period June 1, 2020 through December 31, 2023.
We conducted this performance audit in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives, except as noted below.
Paragraph 9.12 of the US Government Accountability Office’s Government Auditing Standards states, “Auditors should . . . report any significant constraints imposed on the audit approach by information limitations or scope impairments.” During our audit, we experienced constraints regarding our ability to obtain the information necessary to draw conclusions related to certain aspects of our first objective, in the areas of station maintenance, snow and ice removal, fare collection, fleet availability, and ADA inspections.
Below is a list of our audit objectives, indicating each question we intended our audit to answer; the conclusion we reached regarding each objective; and, if applicable, where each objective is discussed in the audit findings.
| Objective | Conclusion |
| --- | --- |
| 1. Did the MBTA ensure that Keolis met the performance requirements outlined in Section 2 of Schedule 6.1 of the MBTA’s “Commuter Rail Operating Agreement 159–12” with Keolis? | No; see Findings 1, 2, 5, and 6 |
| 2. Did the MBTA enforce the required report deadlines in Section 2.1.4 of Schedule 3.14 of the operating agreement that are related to the performance requirements in Section 2 of Schedule 6.1? | No; see Finding 4 |
| 3. Did the MBTA enforce the performance incentive payment caps outlined in Section 6.5 of Schedule 6.1 and Section 4.2 of Schedule 17 of the operating agreement? | Partially; see Finding 3 |
To accomplish our audit objectives, we gained an understanding of the aspects of the MBTA’s internal control environment relevant to the objectives by reviewing applicable policies and procedures and by interviewing officials at the MBTA and Keolis. We evaluated the design and implementation of the internal controls related to our audit objectives. We also tested the operating effectiveness of controls related to the authorization of the MBTA’s revenue-sharing invoices with Keolis. In performing our work, we found that the MBTA has not developed an internal control plan (see Finding 6).
In addition to the findings mentioned in the table above, the MBTA did not ensure that Keolis employees with access to its Train Resource Management System (TRMS) completed annual cybersecurity awareness training (see Finding 5 for more information). We also found that the MBTA did not ensure that Keolis complied with the requirements of the MBTA’s Disadvantaged Business Enterprise Program. See the “Other Matters” section of this report for more information.
To obtain sufficient, appropriate evidence to address our audit objectives, we performed the procedures described below.
Keolis Performance Evaluation
To determine whether the MBTA ensured that Keolis met the performance requirements outlined in Section 2 of Schedule 6.1 of the MBTA’s “Commuter Rail Operating Agreement 159–12” with Keolis, we took the actions described below.
On-Time Performance
We interviewed the MBTA’s director of transportation, acting director of transportation, and deputy director of railroad operations finance to discuss the MBTA’s process for monitoring Keolis’s performance in this area. Keolis provided us with a list, extracted from TRMS, of all 508,550 scheduled train trips that took place during the audit period. Of these 508,550 scheduled train trips, we determined that 20,530 did not arrive on time after all waivers were applied. Using the 20,530 scheduled train trips that were not on time, we recalculated the dollar amount of the on-time performance penalty for each late trip and compared our calculation of the penalty to what the MBTA invoiced Keolis each month. For all instances where there was a variance in the dollar amounts of the penalties invoiced to Keolis, we followed up with the MBTA on the inconsistencies we identified.
Additionally, of the 508,550 scheduled train trips, we determined that 488,020 scheduled train trips arrived on time after all waivers were applied. Using the 488,020 train trips that arrived on time, we summarized the train trips by month and train line. We then recalculated the system-wide adjusted performance to determine whether at least 92% of scheduled train trips arrived on time each month during the audit period. We determined that 42 of the 43 months during the audit period qualified for the on-time performance incentive. We then recalculated the dollar amount of the on-time performance incentive by month and train line and compared our calculation of the incentive to what the MBTA paid Keolis each month. For all instances where there was a variance in the dollar amounts of the incentives paid to Keolis, we followed up with the MBTA on the inconsistencies we identified.
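For illustration only, the sketch below shows how a monthly recalculation of this kind could be performed against a trip-level table. The column names, sample values, and grouping are assumptions made for this sketch; they do not reproduce the actual TRMS extract, waiver logic, or incentive amounts.

```python
import pandas as pd

# Hypothetical trip-level extract; column names are illustrative, not the actual TRMS fields.
trips = pd.DataFrame({
    "month": ["2023-01", "2023-01", "2023-01", "2023-02"],
    "line": ["Fitchburg", "Fitchburg", "Worcester", "Worcester"],
    "on_time_after_waivers": [True, False, True, True],
})

# System-wide adjusted on-time performance by month (share of trips on time after waivers).
monthly = trips.groupby("month")["on_time_after_waivers"].mean()

# Months at or above the 92% threshold would qualify for the on-time performance incentive.
qualifying_months = monthly[monthly >= 0.92].index.tolist()
print(monthly)
print("Months qualifying for incentive:", qualifying_months)
```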
Based on the results of our testing, we determined that, during the audit period, the MBTA did not always accurately assess performance-based incentives and penalties for Keolis. See Finding 1a and 1c for more information.
Seat Availability
We interviewed the MBTA’s acting director of transportation and deputy director of railroad operations finance to discuss the MBTA’s process for monitoring Keolis’s performance regarding seat availability. We analyzed, from TRMS, each of the 508,550 scheduled train trips that took place during the audit period. Of the 508,550 scheduled train trips, we determined that 572 scheduled train trips did not meet the required number of seats. We then recalculated the dollar amount of the financial penalties by month and compared our calculations to what the MBTA invoiced Keolis each month. For all instances where there was a variance in the dollar amounts of the penalties invoiced to Keolis, we followed up with the MBTA on the inconsistencies we identified.
Additionally, of the 508,550 scheduled train trips, we determined that 30,565 scheduled train trips exceeded the seat availability requirement during peak commuter hours and qualified for an incentive payment. We recalculated the dollar amount of the seat availability incentive and compared our calculation of the incentive to what the MBTA paid Keolis each month. For all instances where there was a variance in the dollar amount of the incentives paid to Keolis, we followed up with the MBTA on the inconsistencies we identified.
Based on the results of our testing, we determined that, during the audit period, the MBTA did not always accurately assess performance-based incentives and penalties to Keolis. See Finding 1b and 1c for more information.
Americans with Disabilities Act Compliance
We interviewed the MBTA’s director of mechanical to discuss the MBTA’s process for monitoring Keolis’s performance regarding certain aspects of Americans with Disabilities Act (ADA) compliance. The MBTA provided us with a list of all 70 failed ADA inspections that took place during the audit period, which were documented by the MBTA in a Microsoft Excel spreadsheet. For each of the 70 failed inspections, we determined whether the MBTA penalized Keolis for the failed inspection and we recalculated the penalty amounts invoiced to Keolis each month. For all instances where there was a variance in the dollar amounts of the penalties invoiced to Keolis, we followed up with the MBTA on the inconsistencies we identified.
Paragraph 9.12 of the US Government Accountability Office’s Government Auditing Standards states, “Auditors should . . . report any significant constraints imposed on the audit approach by information limitations or scope impairments.” During our audit, we experienced constraints regarding our ability to obtain the information necessary to draw conclusions related to certain aspects of this objective. Specifically, we asked MBTA management to provide us with data for all ADA inspections. However, MBTA management told us that they do not retain reports or records of passing inspections. As a result, we were unable to independently verify the number and accuracy of passed ADA inspections. Due to this data limitation, we had to limit the scope of our audit to only include a review of failed ADA inspections.
Based on the results of our testing, we determined that, during the audit period, the MBTA did not always accurately assess performance-based incentives and penalties to Keolis and did not maintain sufficient documentation of certain inspections designed to evaluate Keolis’s performance. See Finding 1b and Finding 2 for more information.
Fleet Availability
We interviewed the MBTA’s chief railroad officer, director of engineering, and deputy director of railroad operations finance to discuss the MBTA’s process for monitoring Keolis’s performance regarding fleet availability. The MBTA provided us, from TRMS, with all monthly fleet availability reports covering the audit period. These reports detail the specific vehicles available for revenue service each day. From these reports, we identified the number of locomotives and coaches (including both bilevel and cab control cars) that were available for revenue service for each day during the audit period. For each day during the audit period, we determined whether Keolis had the required number of locomotives and coaches available for revenue service.
For each day, Keolis needed to have available for both the north and south divisions 62 locomotives, 359 coaches, 130 bilevel cars, and 61 cab control cars.
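As a simple illustration of the daily comparison described above, the sketch below checks one day’s reported counts against these contractual minimums. The field names and sample counts are hypothetical and do not reflect the actual layout of the fleet availability reports.

```python
# Contractual daily minimums, as summarized above.
REQUIRED = {"locomotives": 62, "coaches": 359, "bilevel_cars": 130, "cab_control_cars": 61}

# Hypothetical counts for one day, as they might be drawn from a monthly fleet availability report.
available = {"locomotives": 63, "coaches": 355, "bilevel_cars": 131, "cab_control_cars": 61}

# Any category falling short of the requirement would be an exception to follow up on.
shortfalls = {k: REQUIRED[k] - available[k] for k in REQUIRED if available[k] < REQUIRED[k]}
print("Shortfalls:", shortfalls or "none")
```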
Paragraph 9.12 of the US Government Accountability Office’s Government Auditing Standards states, “Auditors should . . . report any significant constraints imposed on the audit approach by information limitations or scope impairments.” During our audit, we experienced constraints regarding our ability to obtain the information necessary to draw conclusions related to certain aspects of this objective. Specifically, we asked MBTA management to provide us with data for the availability of locomotives and coaches, the vehicle location (north or south division), and the time of day that the vehicles were in use/available. MBTA management told us that they could not provide us with the data for the audit period because the MBTA’s reports do not capture key details, such as the vehicle location (north or south division) or the time of day that the vehicles were in use/available. As a result, we were unable to determine whether the vehicles were available during peak periods and at the required locations each day, as specified in the contract. Instead, we were only able to determine whether the total number of vehicles met the required availability.
Based on the results of our testing, we determined that, during the audit period, the MBTA did not maintain sufficient documentation of certain inspections designed to evaluate Keolis’s performance. See Finding 2 for more information.
Passenger Comfort and Service Amenities, Including Class C Cleaning and Vehicle Cleanliness
We interviewed the MBTA’s director of mechanical to discuss the MBTA’s process for monitoring Keolis’s performance in the areas of passenger comfort and service amenities, including certain cleaning services. The MBTA provided us with a list of all 8,522 completed inspection forms for Class C cleaning, Class A cleaning, and mechanical amenities that were completed during the audit period. We then stratified the population of 8,522 inspection forms into two categories based on whether the inspection passed or failed. During the audit period, there were 6,142 failed inspections and 2,380 passed inspections. We then further divided each of these two strata into two categories of inspection forms: inspection forms with blank key fields (such as inspector’s name, train number, inspection date, and location of the inspection) and inspection forms with complete information.
For the 2,380 passed inspections, we identified a total of 39 passed inspection forms with blank key fields. From the 39 passed inspection forms with blank key fields, we selected a random, nonstatistical sample of 10 passed inspection forms for testing. For the remaining 2,341 passed inspection forms with complete information, we selected a random, statistical sample of 47 forms, using a 90% confidence level, a 0% expected error rate, and a 10% tolerable error rate. We identified the total number of cleaning points and other failures of mechanical amenities documented on MBTA inspection forms to verify the accuracy of each passed inspection.
For the 6,142 failed inspections, we identified a total of 398 failed inspection forms with blank key fields. From these 398 failed inspection forms with blank key fields, we selected a judgmental, nonstatistical sample of 50 failed inspection forms for testing. For the remaining 5,744 failed inspections, we took a random, statistical sample of 74 failed inspections, using a 90% confidence level, a 20% expected error rate, and a 50% tolerable error rate. We identified the total number of cleaning points and other failures of mechanical amenities documented on MBTA inspection forms to verify the accuracy of each failed inspection. We also reviewed MBTA penalty logs to ensure that each failed inspection was penalized. If a failed inspection appeared on the penalty log, we requested the penalty summary to ensure that the correct dollar amount of the penalty was invoiced to Keolis. If a failed inspection did not appear on the penalty log, we followed up with the MBTA on the inconsistencies we identified.
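For illustration only, the sketch below shows one way random selections of the kind described in the two preceding paragraphs could be drawn reproducibly. The sample sizes are simply the ones stated above (they were determined with audit sampling parameters, not by this code), and the column names, population construction, and seed are assumptions for this sketch.

```python
import pandas as pd

# Hypothetical populations; sizes match those described in the text.
passed_complete = pd.DataFrame({"form_id": [f"P{i}" for i in range(1, 2342)]})  # 2,341 forms
failed_complete = pd.DataFrame({"form_id": [f"F{i}" for i in range(1, 5745)]})  # 5,744 forms

# Random, statistical samples of 47 passed and 74 failed inspection forms.
# A fixed seed makes the selection reproducible; the seed value is illustrative.
passed_sample = passed_complete.sample(n=47, random_state=2024)
failed_sample = failed_complete.sample(n=74, random_state=2024)
print(len(passed_sample), "passed forms sampled;", len(failed_sample), "failed forms sampled")
```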
Based on the results of our testing, we determined that, during the audit period, the MBTA did not always accurately assess performance-based incentives and penalties for Keolis. See Finding 1a for more information.
Fleet Maintenance
We interviewed the MBTA’s director of mechanical to discuss the MBTA’s process for monitoring Keolis’s performance in this area. The MBTA provided us with a list of all completed fleet inspection forms during the audit period. The MBTA also provided us with all preventative maintenance work orders created by Keolis during the audit period, which the MBTA extracted from its Trapeze Enterprise Asset Management fleet maintenance software. For each month that the MBTA performed the required number of inspections for both locomotives and coaches, we recalculated the dollar amount of the financial penalties and compared our calculations to what the MBTA invoiced Keolis. For all instances where there was a variance in the dollar amounts of the penalties invoiced to Keolis, we followed up with the MBTA on the inconsistencies we identified.
For all 524 locomotive inspection forms, we stratified the forms into three strata: 331 inspections that passed, 179 inspections that failed, and 14 inspections with a blank pass/fail field. We took a judgmental, nonstatistical sample of 50 inspections, which included 10 inspections that passed, 26 inspections that failed, and all 14 inspections with a blank pass/fail field. For inspections that passed, failed, or had a blank pass/fail field, we identified any failed areas noted on the inspection form. If a failed area was noted, we searched Trapeze for a service request, because MBTA inspectors are required to open service requests when they note a failure during an inspection. For all instances where an MBTA inspector identified a failed area and did not create a service request, we followed up with the MBTA on the inconsistencies we identified.
To determine whether the MBTA created a service request for all issues identified during the fleet maintenance inspections documented on the 1,065 coach fleet maintenance forms, we selected a random, statistical sample of 60 cab control coach and blind trailer coach fleet maintenance inspections, using a 95% confidence level, a 0% expected error rate, and a 5% tolerable error rate. For inspections that passed or failed, we identified any failed areas noted on the inspection form. If a failed area was noted, we searched Trapeze for a service request opened by the MBTA inspector. For all instances where an MBTA inspector identified a failed area and did not create a service request, we followed up with the MBTA on the inconsistencies we identified.
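The matching step described in the two preceding paragraphs amounts to an anti-join: failed areas noted on inspection forms that have no corresponding service request in Trapeze. The sketch below is a minimal illustration; the identifiers, column names, and sample records are hypothetical, not the actual Trapeze or inspection-form fields.

```python
import pandas as pd

# Hypothetical failed areas noted on inspection forms.
failed_areas = pd.DataFrame({
    "inspection_id": ["I-101", "I-101", "I-205"],
    "vehicle": ["Locomotive 2001", "Locomotive 2001", "Coach 710"],
    "failed_area": ["horn", "headlight", "door"],
})

# Hypothetical service requests extracted from Trapeze.
service_requests = pd.DataFrame({
    "inspection_id": ["I-101"],
    "failed_area": ["horn"],
    "service_request_no": ["SR-9001"],
})

# Failed areas with no matching service request are the exceptions to follow up on.
merged = failed_areas.merge(service_requests, on=["inspection_id", "failed_area"], how="left")
exceptions = merged[merged["service_request_no"].isna()]
print(exceptions[["inspection_id", "vehicle", "failed_area"]])
```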
Based on the results of our testing, we determined that, during the audit period, the MBTA did not always accurately assess performance-based penalties to Keolis. See Finding 1a and 1b for more information.
Train Staffing
We interviewed the MBTA’s director of transportation to discuss the MBTA’s process for monitoring Keolis’s performance regarding train staffing. We analyzed, within TRMS, each of the 508,550 scheduled train trips. Of the 508,550 scheduled train trips, we determined that 1,604 scheduled train trips did not meet the required train staffing. We then recalculated the dollar amount of the financial penalties by month and compared our calculations to what the MBTA invoiced Keolis each month. For all instances where there was a variance in the dollar amounts of the penalties invoiced to Keolis, we followed up with the MBTA on the inconsistencies we identified.
Additionally, of the 508,550 scheduled train trips, we determined that 81,883 scheduled train trips exceeded the train staffing requirement during peak commuter hours and qualified for an incentive payment. We recalculated the dollar amount of the train staffing incentive and compared our calculation of the incentive to what the MBTA paid Keolis each month. For all instances where there was a variance in the dollar amount of the incentives paid to Keolis, we followed up with the MBTA on the inconsistencies we identified.
Based on the results of our testing, we determined that, during the audit period, the MBTA did not always accurately assess performance-based incentives and penalties for Keolis. See Finding 1b for more information.
Station Maintenance, Snow and Ice Removal, and Fare Collections
We requested inspection logs from the director of engineering to determine how many times the MBTA requested that Keolis maintain certain stations and remove snow and ice, and how the MBTA monitored the timeliness of Keolis’s performance in these areas. We also requested audit logs to determine how often the MBTA performed inspections of fare collections and whether Keolis was penalized for all failed inspections.
Paragraph 9.12 of the US Government Accountability Office’s Government Auditing Standards states, “Auditors should . . . report any significant constraints imposed on the audit approach by information limitations or scope impairments.” During our audit, we experienced constraints regarding our ability to obtain the information necessary to draw conclusions related to certain aspects of this objective. Specifically, we asked MBTA management to provide us with a log of when they contacted Keolis for station maintenance, snow and ice removal, and audits of fare collections. MBTA management told us that they could not provide us with the data for the audit period because the MBTA does not log phone calls or emails, and it only retains records of inspections where Keolis fails to collect fares. As a result, we had to limit the scope of our review for these areas (station maintenance, snow and ice removal, and fare collections).
Based on the results of our testing, we determined that, during the audit period, the MBTA did not maintain sufficient documentation of certain inspections designed to evaluate Keolis’s performance. See Finding 2 for more information.
Operator Deliverable Requirement List Reports
To determine whether the MBTA enforced the required report deadlines in Section 2.1.4 of Schedule 3.14 of the MBTA’s “Commuter Rail Operating Agreement 159–12” with Keolis that are related to the performance requirements in Section 2 of Schedule 6.1 of this contract, we took the actions described below.
We interviewed the MBTA’s deputy director of contract management, director of engineering, supervisor of general accounting, acting director of transportation, chief railroad officer, and director of mechanical to discuss the MBTA’s process for monitoring Keolis’s performance in this area. Using Appendix I of Schedule 3.14 of the operating agreement between the MBTA and Keolis, we identified 10,080 reports associated with the areas related to performance requirements. We analyzed all 10,080 reports and stratified the population into two categories: 7,901 reports that are stored within TRMS and 2,179 reports that are stored on a shared drive. Because the 7,901 reports stored within TRMS were included in our testing for Objective 1 (described above), we did not include them in this testing.
For the 2,179 reports (annual, quarterly, weekly, monthly, and daily) stored in Microsoft SharePoint, we used a 90% confidence level, a 50% expected error rate, and a 20% tolerable error rate to select a random, statistical sample of 73 reports, which we then increased to 75 reports. For each report, we reviewed SharePoint and/or email submission dates to determine whether the report was delivered on time. For any reports that were delivered late, we calculated how many hours past due they were and the dollar amount of the penalty that should have been incurred based on that delay. We then compared our calculation of the penalty amount incurred to what the MBTA invoiced Keolis each month.
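As a simplified illustration of this timeliness calculation, the sketch below computes hours past due from a deadline and an observed submission timestamp and applies a penalty rate. The dates, the per-hour rate, and the rounding to whole hours are all hypothetical; the actual penalty amounts and rules are set by the operating agreement and are not reproduced here.

```python
from datetime import datetime
from math import ceil

HYPOTHETICAL_PENALTY_PER_HOUR = 100  # illustrative rate only; the contract sets the actual amount

deadline = datetime(2022, 7, 1, 17, 0)    # hypothetical report deadline
submitted = datetime(2022, 7, 2, 9, 30)   # hypothetical SharePoint/email submission timestamp

# Rounding up to whole hours is an assumption made for this sketch.
hours_late = max(0, ceil((submitted - deadline).total_seconds() / 3600))
penalty = hours_late * HYPOTHETICAL_PENALTY_PER_HOUR
print(f"{hours_late} hours past due; calculated penalty ${penalty:,}")
```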
Based on the results of our testing, we determined that, during the audit period, the MBTA did not ensure that Keolis submitted required reports on time and did not penalize Keolis for noncompliance. See Finding 4 for more information.
Performance Incentive Payment Caps and Revenue Sharing
To determine whether the MBTA enforced the performance incentive payment caps outlined in Section 6.5 of Schedule 6.1 and Section 4.2 of Schedule 17 of the MBTA’s “Commuter Rail Operating Agreement 159–12” with Keolis, we took the actions described below.
Performance Incentive Payments
The MBTA provided us with all 42 monthly invoices created during the audit period, which showed a total of $26,010,234 in incentives paid to Keolis. We then determined whether the monthly incentive payments, by performance category, were within the monthly caps; whether the annual incentive payments, by performance category, were within the annual caps; and whether the annual total of all incentives paid by the MBTA was within the annual caps. For any incentives paid to Keolis that exceeded a monthly or annual cap, we followed up with the MBTA on the inconsistencies we identified.
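The cap test described above reduces to comparing incentive payments, by performance category, against monthly and annual limits. The sketch below is illustrative only: the payment amounts and cap values are hypothetical placeholders, not the figures set in Schedule 6.1 or Schedule 17.

```python
import pandas as pd

# Hypothetical monthly incentive payments by category (dollar figures are illustrative).
payments = pd.DataFrame({
    "fiscal_year": [2022, 2022, 2022],
    "month": ["2021-07", "2021-08", "2021-09"],
    "category": ["on-time performance", "on-time performance", "seat availability"],
    "amount": [450_000, 520_000, 300_000],
})

MONTHLY_CAP = {"on-time performance": 500_000, "seat availability": 350_000}      # hypothetical caps
ANNUAL_CAP = {"on-time performance": 5_000_000, "seat availability": 3_000_000}   # hypothetical caps

# Monthly payments above the category cap.
over_monthly = payments[payments["amount"] > payments["category"].map(MONTHLY_CAP)]

# Annual totals above the category cap.
annual = payments.groupby(["fiscal_year", "category"])["amount"].sum().reset_index()
over_annual = annual[annual["amount"] > annual["category"].map(ANNUAL_CAP)]

print("Over monthly cap:\n", over_monthly)
print("Over annual cap:\n", over_annual)
```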
Based on the results of our testing, we determined that, during the audit period, the MBTA did not always enforce the monthly and annual performance incentive payment caps. See Finding 3 for more information.
Revenue Sharing
We interviewed the MBTA’s executive director of the commuter rail to discuss the MBTA’s process for monitoring Keolis’s performance in this area. We analyzed all four annual invoices created during the audit period, which showed a total of $200,825 that was paid to Keolis and $1,885,872 that was earned by the MBTA. We examined each annual invoice and recalculated the revenue sharing between Keolis and the MBTA.
We noted no exceptions in our testing. Therefore, we determined that, during the audit period, the MBTA met the relevant criteria regarding revenue sharing.
We used a combination of statistical and nonstatistical sampling methods for testing, and we did not project the results of our testing to the corresponding population(s).
Data Reliability Assessment
Financial Management Information System
To determine the reliability of the data received from the MBTA’s Financial Management Information System, we interviewed MBTA officials who were responsible for oversight of the data. We also reviewed System and Organization Control reports that covered the audit period and ensured that certain information system control tests had been performed.
To test the accuracy of the monthly invoice data we received, we randomly selected a sample of 20 expenses listed on the MBTA’s accounts payable account and traced them to invoices. To test the completeness of the monthly invoice data we received, we selected a different random sample of 20 invoices and traced these back to the MBTA’s accounts payable account. In addition, for fiscal years 2021, 2022, and 2023, we compared the dollar amounts recorded in the MBTA’s general ledger for the commuter railroad and local subsidy expense categories to the corresponding amounts found in the MBTA’s audited financial statements.
TRMS
To determine the reliability of the list of train trips scheduled during the audit period, which Keolis provided to us, we interviewed MBTA and Keolis officials who were responsible for oversight of the data. In addition, we tested certain information system general controls over TRMS (for more information, see Finding 5). We also checked the list for worksheet errors (hidden rows and columns, embedded data, and other contents) and duplicate records, and we checked for gaps in scheduled trip numbers and followed up on inconsistencies in the numbering to determine whether any scheduled trips had been deleted.
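The duplicate and gap checks described above can be expressed directly against the trip list. The sketch below is illustrative; the column name and values stand in for the actual TRMS trip identifiers.

```python
import pandas as pd

# Hypothetical extract of scheduled trips; "trip_no" stands in for the actual TRMS identifier.
trips = pd.DataFrame({"trip_no": [1001, 1002, 1002, 1005]})

# Duplicate records.
duplicates = trips[trips.duplicated("trip_no", keep=False)]

# Gaps in the trip-number sequence, which could indicate deleted scheduled trips.
expected = set(range(trips["trip_no"].min(), trips["trip_no"].max() + 1))
gaps = sorted(expected - set(trips["trip_no"]))

print("Duplicates:\n", duplicates)
print("Gaps in numbering:", gaps)
```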
Trapeze Enterprise Asset Management Fleet Maintenance Software
To determine the reliability of the work order and service request data received from the MBTA’s Trapeze Enterprise Asset Management Fleet Maintenance Software, we interviewed MBTA and Keolis officials responsible for oversight of the data. In addition, we tested information system general controls and policies related to access control over the Trapeze Enterprise Asset Management Fleet Maintenance Software. We also tested the work order and service request data for any worksheet errors (hidden rows and columns, embedded data, and other contents) and tested for duplicates in the data. We tested the work order and service request data for gaps in the work order or service request numbers and followed up on instances of inconsistencies in the numbering to determine whether there were any deleted work orders or service requests.
Form.com
Form.com is repository software that the MBTA uses to document inspections of Keolis’s performance related to passenger comfort and service amenities. We assessed the reliability of the data obtained from Form.com by interviewing the MBTA personnel responsible for oversight of the data. We also checked the inspection report worksheet for gaps in the documented inspections and their corresponding inspection numbers, test inspections recorded in the data, illogical dates (such as an inspection date that falls after a documentation date), and duplicate inspections. When we found discrepancies (such as missing data, duplicate records, or data entry errors), we brought them to the attention of MBTA management and worked with them to correct any discrepancies.
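The date-logic check noted above (an inspection date falling after its documentation date) can be illustrated as follows. The column names and sample records are hypothetical stand-ins for the Form.com export fields.

```python
import pandas as pd

# Hypothetical Form.com export; column names are illustrative.
inspections = pd.DataFrame({
    "inspection_no": [501, 502, 503],
    "inspection_date": pd.to_datetime(["2022-03-01", "2022-03-05", "2022-03-10"]),
    "documentation_date": pd.to_datetime(["2022-03-01", "2022-03-04", "2022-03-11"]),
})

# Illogical dates: the inspection supposedly happened after it was documented.
illogical = inspections[inspections["inspection_date"] > inspections["documentation_date"]]
print(illogical)
```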
SharePoint
The MBTA and Keolis use SharePoint to transfer information relating to Keolis’s reporting obligations. We conducted interviews with MBTA and Keolis officials responsible for the data residing in SharePoint. In addition, we tested information system general controls and policies related to access control over SharePoint and tested selected access controls.
Excel Reports
The MBTA uses Microsoft Excel to document inspections of Keolis’s performance related to locomotive fleet maintenance and ADA compliance. We conducted interviews with the MBTA personnel responsible for the oversight of the data inspection reports. We checked the data inspection report worksheet for any illogical dates, such as an inspection date that falls outside the year of inspection, and for blank fields, such as the inspector’s name and whether the inspection resulted in a passing or failing score. When we found discrepancies (such as missing data or illogical dates), we brought them to the attention of management in the Commuter Rail Department and worked with them to correct any discrepancies.
Based on the results of the data reliability assessment procedures described above, we determined that the information we obtained was sufficiently reliable for the purposes of our audit.
Date published: March 4, 2025