The Department of Defense (DoD) finds it difficult to determine when higher technical ratings for contractor proposals justify paying a higher price. The acknowledged difficulty in making such decisions and the magnitude of DoD acquisitions for which trade-off decisions are needed demand resolution of this problem. The October 2010 Government Accountability Office (GAO) report GAO-11-8, Enhanced Training Could Strengthen DOD’s Best Value Tradeoff Decisions, emphasized the difficulty:
According to DOD officials, making sound trade-off decisions, and in particular, deciding whether or not a price differential is warranted, is one of the most difficult aspects of using a best value trade-off process.
The magnitude of technical/cost trade-off decisions is evidenced by the fact that, as reported in that GAO report, the best value process is applicable in roughly 95 percent of DoD’s new, competitively awarded contracts valued at $25 million or more.
DoD’s contractor selection process is described in an April 1, 2016, memorandum on “Department of Defense Source Selection Procedures” from the then Under Secretary of Defense for Acquisition, Technology, and Logistics, Frank Kendall. Use of the guidelines in this memorandum in the contractor selection process is mandated by Defense Federal Acquisition Regulation Supplement (DFARS) PGI 215.3—Source Selection. The source-selection procedures in the 2016 memorandum are superior to, and provide greater detail than, those in the 2011 memorandum. Unfortunately, however, certain deficiencies in the 2011 memorandum were carried over into the 2016 memorandum, resulting in cryptic contractor selection decision matrices. This article examines deficiencies in the 2016 memo’s method for rating contractor proposals and for describing the importance of factors and/or subfactors, and it recommends improvements.
One approach this author used to understand weaknesses in federal contractor selection practices involved evaluating GAO decisions that sustained protests from Aug. 1, 2010, to July 31, 2012. The GAO’s stated reasons for sustaining protests can be considered weaknesses in the contractor selection process. GAO-sustained protests during 2015 and 2016 were then reviewed to verify the continuation of those weaknesses; all 65 GAO-sustained protests during those 2 years were evaluated. The author noted one pertinent weakness in DoD’s contractor selection process that GAO did not mention: The decision matrices prepared by the source-selection evaluation board (SSEB) and presented to the source-selection authority (SSA) often were cryptic or in other ways failed to facilitate technical/cost trade-off decisions. The SSA is sent a narrative of the SSEB’s proposal evaluation activities, but the decision matrices offer an overview of that evaluation and are intended to provide the SSA with meaningful additional insight.
I believe that the cryptic contractor selection matrices are due to the DoD restriction against numerically indicating the importance of evaluation factors and against numerically rating contractor proposals. The 2016 memo does, however, allow waivers of the prescribed proposal rating method for technical factors and subfactors. The 2016 memo continues the 2011 memo’s requirement that only relative terms be used to represent the importance of factors and/or subfactors considered in selecting contractors; adjectival or color schemes may be used to score the proposals. Notably, the numerical rating of factors and/or subfactors is permitted by Federal Acquisition Regulation (FAR) 15.305(a), and numerical representation of the importance of evaluation factors also is permitted for Armed Forces contracting by 10 U.S. Code §2305.
Other independent research into GAO-reviewed protests identified a way to evaluate proposals that could resolve DoD’s technical/cost trade-off dilemma. That 2015 research into the best state and local government contracting practices revealed how to clearly identify best value proposals. The findings involved 15 states, 16 large cities, and three other local agencies and were a follow-up to 2006 research.
Two significant errors were found in the proposal scoring used by the majority of the government entities reviewed in the 2015 research. After correction of those errors, however, their approach to selecting contractors was found to be superior to DoD’s method of identifying best value proposals. The errors involved using anomalous formulas for weighing proposal evaluation scores, and the agencies using them were provided with corrected formulas. Although a formal follow-through was not deemed appropriate, a separate research project noted that a significant number of government entities adopted the corrected formulas. With that correction of the formulas, the revamped state and local government proposal evaluation process could greatly benefit DoD’s technical/cost trade-off identification of contractors proposing the best values. The approach recommended here, besides better identifying best value, will guard against corruption of the contractor selection process.
Two recent examples of contractor selection decision matrices that obscure identification of contractors submitting best value proposals are provided below. The first example (Table 1) comes from GAO’s decision in the matter of Patricio Enterprises, Inc.; File B-412740, B-412740.3, and B-412740.4, May 26, 2016. This illustrates the difficulty in making technical/cost trade-off decisions.
The proposed management and staffing capability of Patricio Enterprises was undeniably superior to that of its competitor, GID. The price proposed by GID and the GID past performance rating favored contract award to GID. The solicitation stated that the evaluation factors (management and staffing capability, price, and past performance) were listed in descending order of importance. In making the contractor selection decision, the SSA could justify a statement that the superior management and staffing capability of Patricio outweighed the lower pricing and better past performance of GID. But the SSA could just as easily have stated that GID’s lower price and better past performance outweighed Patricio’s superior management and staffing capability. The best value proposal was not expressly identified in this decision matrix.
The other recently published decision (Table 2) is from GAO’s decision in the matter of Jacobs Technology, Inc. (JTI); File B-413389 and B-413389.2, Oct. 18, 2016. In this example of a sustained protest, the decision matrix illustrated the tendency to obtain tied scores despite discerned differences by the SSEB in the quality of the proposals.
Apart from the proposed and evaluated cost/price, all the ratings reflected in Table 2 are identical. The following narrative excerpted from the GAO decision, however, indicated differences were discerned in the quality of numerous factors:
An agency source selection advisory council (SSAC) then conducted a comparative assessment of the offerors’ proposals. The SSAC found that, notwithstanding the equivalent ratings, AS&D’s proposal was superior to that of JTI under the scenario, program management, and phase-in plan subfactors (the offerors were considered equal under the subcontract management subfactor). … Similarly, the SSAC found, notwithstanding the equal ratings, JTI’s past performance to be superior in relevance and quality to that of AS&D.
The tendency to reflect tied scores despite factor quality disparities should be disturbing to those interested in the availability of an effective tool for making intelligent contractor selection decisions.
As to the importance of the evaluation factors, the solicitation for the above procurement action stated the following:
The solicitation established that contract award would be made on a “best value” basis, based on three evaluation factors generally in descending order of importance: technical risk (hereinafter, technical); past performance; and cost/price. … The technical factor was comprised of four subfactors in descending order of importance: scenario; program management; subcontract management; and phase-in plan. … The technical and past performance evaluation factors, when combined, were significantly more important than cost/price.
To illustrate a decision matrix that decidedly identifies the best value proposal, the author applied his best efforts to assign numerical values to the significance of evaluation factors based on the relative ranking provided in the Request for Proposals (RFP). Best efforts also were used to assign numerical proposal evaluation scores based on the comparative relationship of factor scores, as described in GAO’s decision.
These best-efforts numerical scores are used in the illustration (Table 3) to show how using them can avoid tied scores by permitting evaluators to reflect the quality differences discerned between competing proposals. Numerical representations of the importance of proposal evaluation factors and numerical scoring of proposals permit weighing of proposal evaluation scores and proposed values to facilitate unambiguous identification of the best-value proposal. The formulas used to weigh the proposed values and proposal evaluation scores are described later in this article.
In the Table 3 example, AS&D, with the highest Total Weighed Score (TWS) of 96.71, is identified as the contractor proposing the best value. This determination is based on the author’s best efforts to quantify DoD’s statement regarding the factors’ relative importance, the contractors’ proposed prices, and the adjectival proposal evaluation scores. This example, however, merely illustrates the superiority of the TWS approach. The need to use best efforts for the numerical values in Table 3 does not support the argument that the proposal of AS&D should have been awarded the contract. In an actual procurement, numerical weights and the numerical scoring scheme would have been described in the solicitation. The proposal evaluation team also would have assigned numerical scores in reviewing the proposals.
The TWS approach to evaluating proposals, in addition to distinctly identifying the best-value proposal, provides the needed transparency so that prospective contractors can understand the precise factor weights and proposal scoring. The transparency, moreover, prevents that rare corrupt government official from manipulating the relative value of proposal evaluation factors/subfactors to justify selecting a favored contractor in exchange for a personal benefit. When an RFP includes a numerical representation of the importance of contractor selection factors, that corrupt official will not be able to manipulate factor weights to benefit a favored contractor.
The statement indicating that “technical and past performance evaluation factors, when combined, were significantly more important than cost/price” injects considerable flexibility for concocting and rating contractor selection factors/subfactors. The factor weight for cost/price could have been within the range of 10 to 25 and still have been considered significantly less important than the combined weight of technical and past performance evaluation factors. The statement regarding the relative importance of the nonprice factors and subfactors also permits flexibility in manipulating the relative weights during proposal evaluation to benefit one of the competing contractors.
Formulas for Weighing Proposal Evaluation Scores
The anomalous and correct formulas for weighing raw proposal evaluation scores for factors such as technical and past performance are shown below:

Anomalous formula: Weighed score = Factor weight × (Raw score ÷ Highest possible raw score)

Correct formula: Weighed score = Factor weight × (Raw score ÷ Highest actual raw score)
The difference between the anomalous and the correct formula for weighing proposal evaluation scores involves using either the highest possible or the highest actual proposal evaluation score. The necessity of adding the weighed scores for both the proposal evaluation scores and the proposed values requires compatibility between their respective formulas. The formula for weighing proposed value scores must include the highest actual proposed value because there can be no highest possible proposed value. Using the highest possible proposal evaluation score in the formula, therefore, distorts the algorithm for calculating the TWS. Use of the anomalous formula may result in an understatement of the weighed value for proposal evaluation scores.
The problem with an understated weighed value for a proposal evaluation score is illustrated in the following example where the experience and price factors are both weighted as 10. Evaluation of the proposal with the highest score for a technical factor (experience) results in a score of 85 on a scale of 70-100. When the anomalous formula is used to weigh the score of 85, the result is 8.5. When the lowest price is weighed with the formula for proposed values, the result is 10.0. The weighed score of 8.5 for the highest rated experience factor is clearly understated when compared to the weighed score of 10.0 for the lowest price. When both factors are weighted the same, the highest rated experience factor and the lowest price factor should have identical weighed scores.
Government personnel more concerned with technical performance and less so with price would have a legitimate reason to object to the calculation of weighed scores with the anomalous formula. Technical personnel are likely to object to a weighed score for the lowest price that is higher than the weighed score for the highest-rated technical factor when both factors have equal weights. When applying the correct formula, the lowest price and highest-rated technical factor will have identical scores when the factor weights are equal.
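The effect of the two formulas can be sketched in a few lines of Python, using the worked example above. This is a hypothetical illustration of the arithmetic, not an excerpt from any agency’s scoring tool:

```python
def weigh_anomalous(raw_score, highest_possible, weight):
    """Anomalous formula: divides by the highest POSSIBLE raw score."""
    return weight * raw_score / highest_possible

def weigh_correct(raw_score, highest_actual, weight):
    """Correct formula: divides by the highest ACTUAL raw score."""
    return weight * raw_score / highest_actual

# Worked example from the text: the experience factor is weighted 10,
# and the highest actual score is 85 on a 70-100 scale.
print(weigh_anomalous(85, 100, 10))  # 8.5  (understated)
print(weigh_correct(85, 85, 10))     # 10.0
```

With equal factor weights, only the correct formula gives the highest-rated experience factor the same weighed score (10.0) that the lowest-price formula gives the lowest price.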
Formulas for Weighing Proposed Values
Formula A and Formula B, below, represent two differing methods by which state and local government agencies express the anomalous formula for weighing proposed values. In its simplest form, the anomalous formula is:

Anomalous formula: Weighed score = Factor weight × (Lowest proposed value ÷ Proposed value)
The purpose of this formula is to weigh proposed values according to the factor weight and convert low proposed values to high weighed scores. This approach is appropriate for factors, such as price, where low values are favorable to the government. Lower proposed values are also favorable to the government for factors such as “weight” for products that will be placed in space orbit. The formula anomaly is characterized by an underrepresentation of midlevel values when three or more contractors compete for a contract. The anomaly is best demonstrated by considering three proposed equidistant prices. In this example, the equidistant prices are $700 million, $800 million and $900 million, with an equal difference of $100 million between the low and midlevel prices as well as between the midlevel and the highest prices. The factor weight in this example is 30. When equidistant proposed prices are weighed and low prices are converted to high scores, the weighed values should also be equidistant. Weighing the above proposed prices with the anomalous formula yields the following results:
Results With Anomalous Formula

$700 million: 30 × ($700M ÷ $700M) = 30.00
$800 million: 30 × ($700M ÷ $800M) = 26.25
$900 million: 30 × ($700M ÷ $900M) = 23.33

The weighed scores are not equidistant: the difference between the scores for $700 million and $800 million is 3.75, while the difference between the scores for $800 million and $900 million is only 2.92.
Results With Correct Formula

Weighing the same prices with the correct formula, Factor weight × (Lowest value + Highest value − Proposed value) ÷ Highest value, yields:

$700 million: 30 × ($700M + $900M − $700M) ÷ $900M = 30.00
$800 million: 30 × ($700M + $900M − $800M) ÷ $900M = 26.67
$900 million: 30 × ($700M + $900M − $900M) ÷ $900M = 23.33
The weighed scores with the correct formula are equidistant because the difference in the weighed scores for $700 million and $800 million is 3.33 and the difference between the weighed scores for $800 million and $900 million is 3.34. The .01 difference between 3.33 and 3.34 is a rounding error.
When using the anomalous formula to weigh proposed values where low numbers favor the government, the weighed scores for the highest and lowest proposed values will be accurate. The weighed score for the midpoint values, however, will be understated. The TWS will, therefore, be inaccurate if an anomalous formula is used, subjecting the government to the possibility of selecting a contractor other than the one offering the best value.
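The equidistance test can be reproduced with a short Python sketch. The anomalous form used here is the lowest-value ratio form implied by the worked results; treat both functions as illustrations of the arithmetic rather than any agency’s exact formula:

```python
def weigh_value_anomalous(value, lowest, weight):
    """Anomalous form: weight x (lowest proposed value / proposed value)."""
    return weight * lowest / value

def weigh_value_correct(value, lowest, highest, weight):
    """Correct form: weight x (lowest + highest - proposed) / highest."""
    return weight * (lowest + highest - value) / highest

prices = [700, 800, 900]  # equidistant proposed prices, $ millions
weight = 30
for p in prices:
    anomalous = round(weigh_value_anomalous(p, 700, weight), 2)
    correct = round(weigh_value_correct(p, 700, 900, weight), 2)
    print(p, anomalous, correct)
# Both formulas agree at the endpoints (30.0 and 23.33), but the
# midlevel price weighs 26.25 (understated) vs. 26.67 (equidistant).
```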
The author’s contention is that the difficulty DoD acknowledges in making technical/cost trade-off decisions results primarily from its restrictions against numerical representations for the importance of factors and subfactors, and against numerical rating of contractor proposals. State and local government agencies often represent the importance of proposal evaluation factors and rate proposals numerically. These government contracting agencies use formulas to weigh the proposal evaluation ratings, as well as the proposed values such as price, to obtain a TWS representing the score for and the importance of each evaluation factor/subfactor. The formula for weighing price, and other proposed values where low values are favorable to the government, also converts low proposed values to high scores. When weighed scores for factors and/or subfactors are totaled, the result is a TWS. The contractor receiving the highest total numerical score is identified as having submitted the best value proposal.
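Putting the pieces together, a minimal TWS computation might look like the following sketch. The offerors, factor weights, and raw scores are hypothetical, and the formulas are the corrected forms discussed in this article:

```python
WEIGHTS = {"technical": 40, "past_performance": 30, "price": 30}

# Hypothetical proposals: raw evaluation scores plus proposed price ($M).
OFFERORS = {
    "Offeror A": {"technical": 90, "past_performance": 80, "price": 800},
    "Offeror B": {"technical": 85, "past_performance": 95, "price": 700},
}

def total_weighed_scores(offerors, weights):
    rated = ("technical", "past_performance")
    # Highest ACTUAL raw score per rated factor (correct formula).
    highest = {f: max(o[f] for o in offerors.values()) for f in rated}
    lo = min(o["price"] for o in offerors.values())
    hi = max(o["price"] for o in offerors.values())
    tws = {}
    for name, o in offerors.items():
        # Rated factors: weight x (raw score / highest actual score).
        score = sum(weights[f] * o[f] / highest[f] for f in rated)
        # Price: convert low proposed values to high weighed scores.
        score += weights["price"] * (lo + hi - o["price"]) / hi
        tws[name] = round(score, 2)
    return tws

scores = total_weighed_scores(OFFERORS, WEIGHTS)
best = max(scores, key=scores.get)
print(scores, "->", best)  # the highest TWS identifies the best value
```

In this hypothetical data set, Offeror B’s perfect past-performance score and lowest price outweigh Offeror A’s technical edge; the point is that the ranking falls out of the arithmetic rather than an after-the-fact narrative.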
DoD would benefit from use of the TWS contractor selection process that distinctly identifies the contractor offering the best value proposal, simplifies the technical/cost trade-off decision, and inhibits procurement corruption.
William Sims Curry is president of WSC Consulting in Chico, California. He is the author of “Government Contracting: Promises and Perils,” 2nd edition, published by Routledge in 2017.
The author can be contacted at email@example.com.
This article appears in “Defense AT&L” magazine; a PDF version of this edition is available for download.