How Do We Right the CMMC Ship?

Previously I wrote the CMMC Trip to Tartarus story under the banner “CMMC is impossible and here is why!” I did not receive many comments saying this assertion was incorrect. In fact, far fewer than I thought I would, and I received many more “thank goodness someone finally said that” comments than anticipated as well.

One comment that struck home, however, came from Brian Thompson, who urged offering constructive solutions rather than just problems. I agree completely with Brian that we need to offer solutions, but in CMMC Trip to Tartarus I was trying to raise awareness of how the reality of the assessment process has become disconnected from the original vision of a maturity model with, in the main, basic and moderate goals.

Diagnose First

The first key to offering solutions should be diagnosing the problem. All too often we offer solutions when we have not really figured out what the actual cause of the casualty is. This, of course, leads to a lot of energy spent doing things that do not actually fix the problem. Perhaps it seems too obvious to mention, but really understanding the problem is a great start, and as we debate and examine solutions, perhaps not enough time is being spent on diagnosis.

So what is the problem? Or is there one at all? Please add your thoughts and comments below. I am interested in the outlook of the professional community. At the very least there are a number of challenges! The strategic problem is the low level of cybersecurity maturity across the Defense Industrial Base (DIB).

Ultimately the entire purpose of CMMC is to raise the bar for the DIB in order to better protect America’s precious information assets. To the extent that CMMC helps solve that particular problem, CMMC is good; to the extent that it hurts or lowers that security bar, it is bad. This is, or should be, the definition of success.

I would argue that raising the bar is not perfect security. It is not security that covers every angle and fully documents the inheritance of every control. DOCUMENTATION IS NOT SECURITY! This is one of my biggest issues with the way the Federal government has established the governance of its own cybersecurity.

There seems to be a presumption on the part of both the Government and many professionals with long experience inside government that the current USG approach to cybersecurity, established in FedRAMP, NIST 800-53, FISMA, and OMB Circular A-130, is good and effective. I beg to differ. It has substituted bureaucracy for security. The more important something is (and cyber is certainly at or near the top of the US priority list), the more bureaucracy we want to throw at it. This is expensive and not at all effective. We cannot enter the debate on the best way ahead for CMMC with the presumption that the current USG model and controls are gold-plated good. They are not. Far from it.

This opinion crystallized for me several years ago when I was given the opportunity to review the last two years of “accomplishments” for the top cyber defense organization of a US Military Service. As I read the two documents I became more and more depressed. Every single bullet was bureaucratic, not substantive. We completed an ATO for this, an MOU for that, and an Instruction for this other thing. Bureaucracy, bureaucracy, bureaucracy. This is NOT cyber defense. We are hiding the fact that we are failing to defend our networks and our nation behind a sea of paper and trying to convince ourselves this is worthwhile work.

We should not and cannot lose sight of raising the bar for real security in pursuit of unattainable zero-risk solutions with perfect bureaucratic coverage. Assessment is a means to drive maturity. Assessments and the work that goes along with them are tolerated overhead that should be minimized where possible. The goal is to do more real security, not more paperwork.

Two Problems with CMMC Raising the Bar

The two primary problems with CMMC assessments I laid out in the CMMC Trip to Tartarus blog were the adoption of the 171A assessment objectives and the requirement of a 100% score to pass. Do you see other specific “problems” with the assessment approach? Do you think my diagnosis of the two main problems is incorrect? I welcome your comments below.

Addressing 171A

Let’s take the first of my identified problems: 171A. Briefly, for anyone relatively new to these conversations, 171A refers to NIST SP 800-171A, the Assessment Guide created by NIST for NIST SP 800-171, which is the baseline standard for “moderate” security under the federal regulations for protecting Controlled Unclassified Information (CUI). 171 has been incorporated verbatim into CMMC, and 171A forms the basis for the CMMC Assessment Guides, which also incorporate those recommendations verbatim as requirements.

On the surface, this seems an obvious and direct choice. It links closely to the derivative requirements of existing regulations (FISMA and its progeny) that apply to federal networks. Many in the Federal Government, and many longtime professionals in the government cyber space, think that this guidance for federal networks should be adopted for contractor networks. Some have even argued, incorrectly in my view, that these requirements already apply to contractor networks.

Trying to recreate, or apply, the Federal way of securing its own networks to contractors is a major mistake. The Federal mechanisms for cybersecurity compliance, as I rant above, are at once enormously expensive and often of dubious benefit for real security. In my personal view, the current Federal mechanisms are enormously expensive ways of papering over risk and hiding it, not mitigating it. They offer a way to expend the enormous amounts of Federal dollars being appropriated for cyber defense on things the Federal enterprise knows how to do, bureaucracy, paperwork, forms, assessments, and studies, while not really doing what needs to be done: real security operations. See my post here on cyber ops.

Bureaucracy is much easier to build a program for, much easier to measure, and much easier to write reports from to show we are doing something, but far less capable of doing the work that actually needs to be done. Some bureaucracy is needed. We have to have it, and CMMC needs assessment. But we must NOT let the bureaucratic requirements get out of control and suck up the already limited DIB resources on far more paperwork than real security. Bottom line: we are walking down the road of applying what we do in the Federal space to contractors, and we should step back and realize that many of the Federal methods are abysmal at providing real security and will not necessarily raise the cyber bar for the DIB.

171A Was Not Written as a 100% Audit Standard

There are two interesting quotes from NIST SP 800-171A that seem to have been lost in the assessment approach that DoD and DIBCAC are using to set the standard.

“The assessment procedures are flexible and can be customized to the needs of the organizations and the assessors conducting the assessments. Security assessments can be conducted as self-assessments; independent, third-party assessments; or government sponsored assessments and can be applied with various degrees of rigor, based on customer defined depth and coverage attributes. The findings and evidence produced during the security assessments can facilitate risk-based decisions by organizations related to the CUI requirements.” Page ii, under Abstract, 800-171A

And

“CAUTIONARY NOTE

The generalized assessment procedures described in this publication provide a framework and a starting point for developing specific procedures to assess the CUI security requirements in NIST Special Publication 800-171. The assessment procedures can be used to generate relevant evidence to determine if the security safeguards employed by organizations are implemented correctly, are operating as intended, and satisfy the CUI security requirements. Organizations have the flexibility to specialize the assessment procedures by selecting the specific assessment methods and the set of assessment objects to achieve the assessment objectives. There is no expectation that all assessment methods and all objects will be used for every assessment. There is also significant flexibility on the scope of the assessment and the degree of rigor applied during the assessment process. The assessment procedures and methods can be applied across a continuum of approaches—including self-assessments; independent, third-party assessments; and assessments conducted by sponsoring organizations (e.g., government agencies). Such approaches may be specified in contracts or in agreements by participating parties.” pg iv, NIST SP 800-171A

We have lost sight of this and instead set the standard at meeting every assessment objective with two forms of evidence. Reportedly, DIBCAC has required three forms of evidence for each Assessment Objective (AO) listed in NIST SP 800-171A. This goes well beyond the envisioned application of the guide as originally written.

In a way, this requires Level 5 maturity against Level 1 controls. I was talking with a colleague this evening about this, and we shared impressions from conducting NIST Cybersecurity Framework (NIST CSF) assessments. In general, we apply the five levels of a maturity model to all of the controls; organizations are then rated on their maturity against those controls from 1 to 5, or sometimes 0 to 5. With CMMC we have taken the approach of adding controls, not maturity, and are in essence requiring Level 5 maturity (perfectly documented, perfectly implemented, and perfectly operating) for every implemented control. Honestly, this does not work well. Companies at a basic or moderate level of maturity are in general not able to implement all of their controls at Level 5 maturity. They just are not there yet.
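The difference between the two models can be made concrete with a small sketch. This is purely illustrative: the control names and maturity ratings below are invented for the example, not taken from any real assessment.

```python
# Hypothetical illustration: a CSF-style maturity rating vs. a
# CMMC-style all-or-nothing result over the same set of controls.
# Control IDs and ratings are made up for this example.

controls = {
    "AC - Limit system access": 3,      # assessor-assigned maturity, 0-5
    "AC - Limit transaction types": 4,
    "IA - Identify users": 5,
    "MP - Sanitize media": 2,
}

# Maturity-model view: the rating profile (or its average) shows where
# the organization actually stands and where to improve.
avg_maturity = sum(controls.values()) / len(controls)

# Current-CMMC view: any control short of "perfect" fails the whole
# assessment, collapsing all of the nuance above into one bit.
passed = all(rating == 5 for rating in controls.values())

print(f"Average maturity: {avg_maturity:.2f}")
print(f"Assessment result: {'PASS' if passed else 'FAIL'}")
```

The maturity view tells this organization it is moderately capable with one weak area; the binary view tells it only that it failed.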

Recommendation for 171A

For all of this, I think the solution is to drop the use of the 171A assessment objectives altogether in the CMMC assessment guides. Define basic and moderate maturity for each “security requirement” (as defined in 800-171) or control (as used in 800-53), and give the company and the assessor room for risk judgment in the assessment. This is not perfect, and it is not meant to be. Perfection and zero risk are not achievable goals.

No Risk Acceptance and Scores Must Be 100%

We need a mechanism that allows for scores of less than 100%. Under the currently stated policy (and the lack of official written references for these interpretations is another major problem, for another blog), failing a single Assessment Objective fails the entire control, and failing a single control fails the assessment for that level. This will not work in the real world. It certainly does NOT work in the world where the USG applies cyber rules to its own networks.

There are several potential ways to approach this in the Assessment Guides, and each likely has its own positives and negatives. We must adjust the approach, though. One option is a scoring approach similar to the DCMA Self-Assessment methodology, with a minimum score per level. Additional allowances for risk management and alternative controls might be another.
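As a rough sketch of the first option: the existing NIST SP 800-171 DoD Assessment Methodology starts from a maximum score of 110 and subtracts a weighted deduction (1, 3, or 5 points) for each unimplemented requirement. The specific requirement IDs, weights, and the passing threshold below are illustrative assumptions, not the official scoring table.

```python
# Sketch of a DCMA/DoD-Assessment-Methodology-style score with a
# minimum passing threshold per level. Weights and threshold here are
# example values, not official figures.

MAX_SCORE = 110  # one point per NIST SP 800-171 requirement

# Unimplemented requirements and their deduction weights (illustrative).
unimplemented = {
    "3.1.1": 5,
    "3.5.3": 5,
    "3.13.11": 3,
    "3.3.9": 1,
}

score = MAX_SCORE - sum(unimplemented.values())

# Instead of demanding a perfect 110, a per-level minimum could gate
# certification while leaving room for risk acceptance.
MIN_PASSING = 80  # hypothetical threshold for this level

print(f"Score: {score} / {MAX_SCORE}")
print("PASS" if score >= MIN_PASSING else "FAIL")
```

Under a scheme like this, an organization with a handful of lower-weight gaps could still certify, while one missing many high-weight safeguards could not.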

Recommendation for Risk Acceptance

We should utilize the working groups already available within the CMMC-AB and rework the assessment guides. This should incorporate the Maturity Model concept, and input from Carnegie Mellon, that underlies the CMMC premise but has been essentially abandoned. As it stands, this is no longer a scalable maturity approach; it is an audit against security controls that requires 100% conformance or failure. There are a number of available constructs from highly experienced professionals in the space. They will not all agree, of course, and we will have to pick options with flaws.

Please put your ideas on how we might best address this aspect in the comments below.

Conclusion: CMMC can work

This can be done. CMMC can be administered, inside the current regulations, as a mechanism for raising the cybersecurity bar across the DIB without dumping billions into paperwork drills that do not enhance security and that crush small businesses. The risk to small businesses must be mitigated, not just talked about. Those risks can be reduced, and real security and risk reduction can be achieved through this program. We need to be thoughtful and very deliberate in the approach, and realize that while we cannot drive instantaneous change, we can drive steady change.
