We had the public post-mortem as planned on Tuesday morning. I didn’t
think to screen capture the attendee list in Zoom, but I’d say there
were about 25 of us, mostly Chef employees.
I’ve completed the report with the immediate corrective actions we
identified here: https://gist.github.com/btm/641d3b0ec331ac34fbe9.
I’ve also cleaned up the GitHub issue, and we’ve been using it to track
regressions and fixes: https://github.com/chef/chef/issues/3107
If you’d like to watch the recording of the hour-long meeting, it will
be available here when it’s done processing:
The biggest difficulty for us in the meeting was avoiding discussing and
designing an ideal test infrastructure, rather than focusing on the
immediate corrective actions we could work on in the next couple of
weeks. Everyone’s available time is of course further limited by
ChefConf coming right up. One of the corrective actions was to hold an
Open Space or BoF (or both!) at ChefConf to continue the discussion.
At Chef, we’ve been using Buildkite for some projects, so we’re going
to look into using it to run some integration tests for Chef, which
would be more visible to everyone, like Travis and AppVeyor. We
currently trigger testing on a wider range of platforms when we make
builds using Jenkins, and soon hope to have these runs happen
automatically on a git trigger. However, the results of those aren’t
visible, nor are they triggered on PRs. So we’re hopeful about
Buildkite.
We’ve got another release coming out soon, within a week. I think
we’re “over the hump” with fixing 12.1.0 regressions and we’ll slow
down with releases a little bit to go back to finishing up the new
Chef Client build cluster (Manhattan) which should make it easier to
release builds at a higher cadence.
I’d personally like to see some discussion from contributors and
maintainers about what kind of testing we should be doing and when.
Should contributors be manually testing their code on multiple
platforms? Should maintainers? Should Chef when we release? Having a
huge test matrix of integration tests will be great, but we’ve all got
to write them. Should we be running common cookbooks, or specific
cookbooks in the chef repository that have high code coverage, e.g.
not just installing a package, but testing every action with a matrix
of attributes like source and version? How about both?
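To illustrate the size of what we’d be signing up for, here’s a minimal Ruby sketch (the actions and attribute values below are hypothetical examples for the package resource, not an actual test harness) of enumerating such a matrix of every action crossed with a few attributes:

```ruby
# Hypothetical test matrix for the package resource: cross every action
# with combinations of attributes like source and version.
actions    = %i[install upgrade remove purge]
attributes = {
  version: [nil, '1.2.3'],          # nil = attribute not set
  source:  [nil, '/tmp/pkg.deb'],
}

# Cartesian product of the attribute values, one hash per combination.
keys   = attributes.keys
combos = attributes.values.first
                   .product(*attributes.values.drop(1))
                   .map { |values| keys.zip(values).to_h }

matrix = actions.product(combos)
matrix.each do |action, attrs|
  desc = attrs.reject { |_, v| v.nil? }
              .map { |k, v| "#{k}=#{v}" }.join(' ')
  puts "package :#{action} #{desc}".strip
end
```

Even this toy example yields 16 cases (4 actions × 2 versions × 2 sources) for a single resource on a single platform, which is why who writes and runs these tests matters.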
We’ve done a pretty good job culturally of agreeing that regression
fixes and new features need tests, but since we’ve never really built
an automated integration testing framework for Chef, we’ve relied a
lot on manual testing. How much of that we do has varied greatly over
time, and I think it still varies quite a bit from contributor to
contributor. I think it would be helpful if we found a baseline and
wrote it up.
Bryan McLellan | chef | engineering lead
(c) 206.607.7108 | (t) @btmspox | (www) http://chef.io