Offered by the Office of the State Auditor

Audit of the State Ethics Commission Objectives, Scope, and Methodology

An overview of the purpose and process of auditing the State Ethics Commission.

Overview

In accordance with Section 12 of Chapter 11 of the Massachusetts General Laws, the Office of the State Auditor has conducted a performance audit of certain activities of the State Ethics Commission (SEC) for the period July 1, 2022 through June 30, 2024.

We conducted this performance audit in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Below is a list of our audit objectives, indicating each question we intended our audit to answer and the conclusion we reached regarding each objective.

1. Objective: To what extent did SEC ensure that Statements of Financial Interests (SFIs) were filed on time and completed in accordance with Sections 3(f), 5(a)–(c), and 5(g)(10) of Chapter 268B of the General Laws?
   Conclusion: To a sufficient extent

2. Objective: Did SEC conduct initial staff reviews of complaints involving the conflict of interest law and the filing of SFIs in a timely manner in accordance with its internal practices?
   Conclusion: Yes

3. Objective: Did SEC issue advisory opinions in a timely manner in accordance with its internal practices?
   Conclusion: Yes

To accomplish our audit objectives, we gained an understanding of SEC’s internal control environment relevant to our objectives by reviewing applicable policies and procedures and by conducting interviews and walkthroughs with SEC management. Upon review of SEC’s internal control plan (ICP), we noted that SEC does not update its ICP annually, as required by the Office of the Comptroller of the Commonwealth’s Internal Control Guide. See Finding 1 for more details.

In addition, to obtain sufficient, appropriate evidence to address our audit objectives, we performed the procedures described below.

Filing of SFIs

To determine whether SEC ensured that SFIs were filed on time in accordance with Sections 3(f), 5(a)–(c), and 5(g)(10) of Chapter 268B of the General Laws, we selected a statistical[1] sample of 60 SFIs filed by public employees and public officials. This sample was drawn from the total population of 6,821 SFIs filed during the audit period, using a 95% confidence level,[2] a 0% expected error rate,[3] and a 5% tolerable error rate.[4] In addition, we selected a nonstatistical,[5] random sample of 35 SFIs filed by candidates from the total population of 138 candidate filings during the audit period. In total, we reviewed 95 SFIs from a combined population of 6,959 SFIs.
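As an illustration only (the audit report does not state the formula used), a sample size consistent with these parameters follows from a standard discovery-sampling approximation: with a 0% expected error rate, the smallest sample n is the one for which the chance of observing zero errors, when the true error rate equals the tolerable rate, falls below 1 minus the confidence level.

```python
import math

def discovery_sample_size(confidence: float, tolerable_error: float) -> int:
    """Smallest n such that (1 - tolerable_error)**n <= 1 - confidence,
    i.e., a zero-error sample gives the desired confidence that the
    population error rate is below the tolerable rate."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - tolerable_error))

# 95% confidence, 5% tolerable error, 0% expected error
print(discovery_sample_size(0.95, 0.05))  # 59
```

The result, 59, is consistent with the 60 SFIs drawn, though the auditors' exact software and rounding conventions are not stated in the report.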

For each of the 95 SFIs in our sample, we identified the filer type (public employee, public official, or candidate), determined the applicable statutory deadline, compared the deadline to the submission date recorded in SEC’s SFI filing system or on the hardcopy SFI, and calculated the number of days between the two to identify late filings. Where applicable, we determined whether an extension had been granted.

We inspected the SFIs submitted to SEC to confirm that all required form fields were completed. For incomplete filings, we evaluated whether SEC followed up with the filer and obtained a reasonable explanation and remediation.

We did not identify any exceptions in our testing. Therefore, we concluded that, during the audit period, SEC ensured that SFIs were filed on time in accordance with Sections 3(f), 5(a)–(c), and 5(g)(10) of Chapter 268B of the General Laws.

SEC’s Investigations of Conflicts of Interest and Financial Disclosure Law Complaints

To determine whether SEC conducted initial staff reviews of complaints involving the conflict of interest law and the filing of SFIs within 90 days, in accordance with its internal practices, we selected a statistical sample of 60 complaints from the total population of 1,809 complaints received by SEC during the audit period. We selected this sample using a 95% confidence level, a 0% expected error rate, and a 5% tolerable error rate.

For each of the 60 complaints in our sample, we calculated the number of days between the receipt and closure of each complaint by validating the dates recorded on complaint letters, faxes, online submission forms, hardcopy intake forms, emails, and closing letters to determine whether initial staff reviews were completed within 90 days of SEC’s receipt of the complaint. Where applicable, we obtained and documented reasonable justifications for delays in completing the initial staff reviews.

We did not identify any significant exceptions in our testing that must be reported under generally accepted government auditing standards. Therefore, we concluded that, during the audit period, SEC conducted initial staff reviews of complaints involving the conflict of interest law and the filing of SFIs in accordance with its internal practices.

Advisory Opinions

To determine whether SEC issued advisory opinions within 30 days, consistent with its internal practices, we analyzed all 11,457 requests for advice during the audit period. We calculated the number of days between the date each request was received and the date it was closed to determine whether SEC responded within its 30-day goal.

We also selected a statistical sample of 75 of these 11,457 requests for advice, using a 90% confidence level, a 20% tolerable error rate, and a 50% expected error rate. We examined case management system notes and related correspondence, such as emails and letters, to confirm that any delays were supported by reasonable and documented explanations, such as legal complexity, additional fact-finding, or staffing constraints.

We did not identify any significant exceptions in our testing that must be reported under generally accepted government auditing standards. Therefore, we concluded that, during the audit period, SEC issued advisory opinions in a timely manner consistent with its internal practices.

We used a combination of statistical and nonstatistical sampling methods for testing, and we did not project the results of our testing to any population. 

Data Reliability Assessment

SFI Filing System

To determine the reliability of SEC’s SFI filing system data, we interviewed SEC officials who were knowledgeable about the system and responsible for its oversight in order to gain an understanding of the system’s controls and data management processes. We reviewed select information system controls, including security management, access controls, configuration management, and contingency planning.

Through this testing, we noted some issues with internal controls over SEC’s SFI filing system. We discussed these issues with SEC management.

We compared the total number of records in the list of SFI filings that we received from SEC to the number we observed during the data extraction process to confirm completeness. We also tested the data to ensure that it did not contain certain dataset issues (i.e., duplicate records, blank fields, and data corresponding to dates outside the audit period).
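The dataset-issue tests named above (duplicate records, blank fields, and dates outside the audit period) can be sketched with hypothetical extract rows; the actual field layout of SEC's SFI filing system is not described in the report.

```python
from datetime import date

# Hypothetical extract rows: (filer_id, submission_date)
rows = [
    ("A1", date(2023, 5, 1)),
    ("A2", date(2024, 1, 15)),
    ("A2", date(2024, 1, 15)),   # duplicate record
    ("A3", None),                # blank field
    ("A4", date(2021, 3, 3)),    # date outside the audit period
]

AUDIT_START, AUDIT_END = date(2022, 7, 1), date(2024, 6, 30)

duplicates = len(rows) - len(set(rows))
blanks = [r for r in rows if r[1] is None]
out_of_period = [r for r in rows
                 if r[1] is not None and not (AUDIT_START <= r[1] <= AUDIT_END)]

print(duplicates, len(blanks), len(out_of_period))  # 1 1 1
```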

Additionally, we selected a random sample of 10 SFI filings from the list and traced filer information (first name, last name, filer type, and submission date) to the hardcopy SFIs to verify accuracy. We also selected 10 hardcopy SFIs from SEC’s filing cabinet and traced them to the list of SFI filings to confirm completeness.

Case Management System

To determine the reliability of the list of complaints and the list of requests for advice within the case management system, we interviewed SEC officials who were knowledgeable about the data and responsible for its oversight in order to gain an understanding of the system’s controls and data management processes. We reviewed select information system controls, including security management, access controls, configuration management, and contingency planning.

Through this testing, we noted some issues with internal controls over SEC’s case management system. We discussed these issues with SEC management.

We compared the total number of records on the list of complaints and the list of requests for advice that we received from SEC to the numbers we observed during the data extraction process to confirm the completeness of both lists. We also tested the data to ensure that it did not contain certain dataset issues (i.e., duplicate records, blank fields, illogical date sequences, non-consecutive incident numbers, and data corresponding to dates outside the audit period). We also compared the list of assigned investigators to SEC’s employee list to confirm that all investigators were SEC employees. We followed up with SEC officials to clarify any anomalies (such as illogical date sequences and missing values) in the data and obtained reasonable explanations for these anomalies.
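Two of the anomaly checks mentioned above, illogical date sequences and non-consecutive incident numbers, can be sketched as follows (the incident data shown is hypothetical):

```python
from datetime import date

def illogical_dates(received: date, closed: date) -> bool:
    """A closure recorded before receipt is an illogical date sequence."""
    return closed < received

def missing_incident_numbers(nums: list[int]) -> list[int]:
    """Incident numbers absent from an expected consecutive run."""
    present = set(nums)
    return [n for n in range(min(nums), max(nums) + 1) if n not in present]

print(illogical_dates(date(2023, 8, 1), date(2023, 7, 15)))  # True
print(missing_incident_numbers([101, 102, 104, 105]))        # [103]
```

Flagged rows like these would then be raised with the auditee for explanation, as the paragraph above describes.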

For the list of complaints, we selected a random sample of 20 complaints submitted by fax or letter from the list of complaints and verified the dates of receipt and incident numbers against the original faxes or letters to ensure accuracy. We also selected 20 complaints submitted by fax or letter from SEC’s filing cabinet and traced the complaint sources, dates of receipt, and incident numbers back to the list of complaints to confirm completeness.

Based on the results of the data reliability assessment procedures described above, we determined that the information we obtained was sufficiently reliable for the purposes of our audit.

1.    Auditors use statistical sampling to select items for audit testing when a population is large (usually over 1,000) and contains similar items. Auditors generally use a statistics software program to choose a random sample when statistical sampling is used. The results of testing using statistical sampling, unlike those from judgmental sampling, can usually be used to make conclusions or projections about entire populations.

2.    Confidence level is a mathematically based measure of the auditor’s assurance that the sample results (statistic) are representative of the population (parameter), expressed as a percentage. A 90% confidence level means that 90 out of 100 times, the statistics accurately represent the larger population.

3.    Expected error rate is the number of errors that are expected in the population, expressed as a percentage. It is based on the auditor’s knowledge of factors such as prior audit results, the understanding of controls gained in planning, or a probe sample. In this case, we are assuming there are no errors in the data provided to us by the auditee.

4.    The tolerable error rate (which is expressed as a percentage) is the maximum error in the population that is acceptable while still using the sample to conclude that the results from the sample have achieved the objective.

5.    Auditors use nonstatistical sampling to select items for audit testing when a population is very small, the population items are not similar enough, or there are specific items in the population that the auditors determine are appropriate to review.

Date published: December 24, 2025
