Do I need to test Chef InSpec controls against Google Cloud Platform?


We are using Chef InSpec to verify various aspects of our infrastructure, which resides in the Google Cloud Platform. We've based our profile on the GoogleCloudPlatform/inspec-gcp-cis-benchmark repository (the GCP CIS 1.1.0 Benchmark InSpec Profile), using a slightly tweaked subset of its controls.

In essence, the controls are tests that run against our GCP projects. My question is: does it make sense to add any kind of tests for the controls themselves? I'm talking about mocking responses from GCP to ensure that the controls behave in the expected way. In general, when using Chef InSpec, what kinds of tests make sense?
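For context, a control in such a profile is an InSpec check along these lines. This is only a sketch: the control id, project name, impact, and expectation are invented for illustration, and it runs under the InSpec runner (`inspec exec`), not as plain Ruby:

```ruby
# Sketch only -- control id, project name, and check are illustrative,
# not copied from the actual CIS profile.
control 'gcp-networking-example' do
  impact 0.8
  title 'Project should not contain the auto-created default network'

  # google_compute_network is a resource from the inspec-gcp resource pack.
  describe google_compute_network(project: 'my-gcp-project', name: 'default') do
    it { should_not exist }
  end
end
```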

In short: does one need to test the tests, or am I going too far? :slight_smile:

Thank you.

Kind regards,

You can test your tests with configuration management content, which is itself a kind of test: set up a series of actions that configure your environment 'in compliance', 'out of compliance', or even 'some middle ground'. I wouldn't say it goes too far, but be careful not to 'test private implementations' - that's not what unit tests are for. Focus instead on coverage of your expected end state, and on making sure your controls cover every state your environment could be in: null, one element, multiple elements, corner cases, etc.

Take this small example of S3 testing:

If I were to write tests, I would have one for each of the four logical states this profile could hit: connection error, no buckets, one bucket, and multiple buckets.

I am not testing whether the S3 plural or singular resources work, and I am not testing the AWS Ruby SDK itself.
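The four states above can be sketched as plain Ruby unit tests against the decision logic a control depends on, using stubbed data rather than the real SDK. Everything here is hypothetical for illustration: the bucket-naming rule, the function name, and the stub client are invented, and none of it calls AWS:

```ruby
# Hypothetical rule under test: bucket names must not start with 'public-'.
# A real control would get bucket names from a cloud resource; here we
# pass them in directly, which is the point of mocking.
def non_compliant_buckets(bucket_names)
  bucket_names.select { |name| name.start_with?('public-') }
end

# State 1: connection error -- a stub client that raises, so we can
# confirm the failure is surfaced rather than silently swallowed.
failing_client = -> { raise IOError, 'connection refused' }
begin
  failing_client.call
rescue IOError => e
  puts "connection error surfaced: #{e.message}"
end

# State 2: no buckets
puts non_compliant_buckets([]).inspect                                # []

# State 3: one (compliant) bucket
puts non_compliant_buckets(['internal-logs']).inspect                 # []

# State 4: multiple buckets, one violating the rule
puts non_compliant_buckets(['internal-logs', 'public-data']).inspect  # ["public-data"]
```

The same shape works for the GCP networking controls in the question: extract the pass/fail logic, then feed it each state your environment could be in.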


Thank you very much, @aaronlippold.

Hey @aaronlippold, just to clarify: you gave me a control, and we already have such controls. In our case, we've mostly focused on networking - controls 3.01 to 3.09 from that profile.

So my question is rather: I have these controls and I run them against GCP. The results give me information about whether my GCP infrastructure is compliant or not. What I was really asking is whether I need to write some kind of tests that cover the controls themselves.

Hi, these are all good questions. My question back is one of scope: what are you actually testing? Assuming the library you're using has good unit and functional tests, your tests would not need to cover that scope. So if you wanted to write another layer of tests to validate that the ten InSpec controls you are using return good data, the question becomes: what is it you are testing?

The InSpec control implementation itself? If you already trust InSpec and its resources (given their tests), and you trust Google's authorship of the profile, then sure: your tests could provide a secondary validation of the 'answer given' by the profile, the retrieval of the data, and the analysis of that data - but you would only be getting validation.

I am not sure that mocking and testing every control of every profile is the right scope for the verification we are really looking for, given that doing so would mean rewriting each profile and control in another language. That would suggest we are not really doing what we want to do. So what do we actually want to ensure?

In our own testing of profiles, we take a known non-configured state of a system (vanilla) and run the profile to produce a known result. We then configure the system to a known expected state (hardened) and run it again. The delta between those two runs validates that the testing tool is working: if the change between the two runs goes from the known starting point A to the expected final state B, then we have verified that the tests 'test' and validate as expected.


However, it's the definition of the 'threshold.yml' for the hardened and vanilla states that really checks that the final expected state is met, and even the implicit acceptable change between them.
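For reference, a threshold file for this kind of gating might look like the following. This is illustrative only: the key names follow the style used by MITRE's inspec_tools/SAF threshold files, but the exact schema depends on the tooling you use to evaluate the InSpec results, so check your tool's documentation:

```yaml
# Illustrative sketch -- key names are in the style of MITRE
# inspec_tools/SAF threshold files, not a definitive schema.

# hardened.threshold.yml: after hardening, everything should pass.
compliance:
  min: 100
failed:
  total:
    max: 0

# A vanilla.threshold.yml would instead pin the expected results for
# the known non-configured state, e.g.:
# compliance:
#   max: 40
# failed:
#   total:
#     min: 12
```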

In your case, you could use a similar approach: configure your network to a known starting point, define a threshold, and run the test; then update the configuration to the expected final state, test against the new threshold, and verify that InSpec returned the expected results for each state - that is, that your state went from A to B. This would test your test, in my view.

Is all this really needed? It depends on how much assurance you want and what degree of trust you need. In our case, it was running our profiles on many, many systems that really found most of the bugs. Verifying that we got the expected threshold and summary of profile results in the two known states usually validated that what we were doing was 'good enough', and the corner cases 'would out' the more we used the tests in uncontrolled environments - aka other people's systems - because in the end, even our own testing environment had our own bias baked in :slight_smile:

Dear @aaronlippold,

Thank you very much for taking the time and explaining all this. I really appreciate this!