You can do pretty much the same thing. If you want this to work identically, the key is to avoid third-party cookbooks and always re-use the same version of each cookbook when you update.
As soon as you introduce versioning, for whatever reason, at some point a version solver needs to kick in to make sure all the dependencies are compatible with the constraints each cookbook specifies. For the workflow you describe, I think you might like to use Policyfiles with a "monorepo" (monorepo == all your cookbooks in one git repo). The Policyfile part solves your dependencies once and writes the result into a JSON document, which acts as a frozen manifest of the cookbooks to use. You could have individual developers check in the JSON files, which your cron job just pushes to the server, or you could .gitignore them and have your cron job generate them with the latest versions of everything.
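To make that concrete, here is a minimal Policyfile sketch. The policy name, run list entries, and the `cookbooks/` path are assumptions about your repo layout, not anything you are required to use:

```ruby
# Policyfile.rb -- a sketch, assuming a monorepo where every cookbook
# lives under cookbooks/ in the same git repo. No Supermarket or other
# third-party source is configured, so the solver only sees your code.
name 'webserver'

# Resolve cookbook dependencies only from this repo's cookbooks/ directory.
default_source :chef_repo, 'cookbooks/'

# Hypothetical run list for illustration.
run_list 'base::default', 'nginx::default'
```

Running `chef install Policyfile.rb` solves the dependencies and writes the frozen `Policyfile.lock.json` manifest; `chef push <policy_group> Policyfile.rb` is what actually uploads it to the server, so that is the command your cron job (or your developers) would run.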
Note that if you only ever use cookbooks from your git repo and forswear the use of versions, then you could skip all this and just run `knife cookbook upload` in your cron job, and things will be fine, though you may want to learn how to use feature branches in your cookbook code to test things out.
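A minimal version of that cron job might look like the following. The repo checkout path and branch are assumptions; adjust them for your setup:

```shell
#!/bin/sh
# upload-cookbooks.sh -- sketch of a cron job that syncs every cookbook
# in the repo to the Chef Server. /srv/chef-repo is an assumed location.
set -e

cd /srv/chef-repo
git pull --ff-only origin master

# Upload all cookbooks; with no version constraints in play, the latest
# copy of each cookbook simply replaces what is on the server.
knife cookbook upload --all
```

A crontab entry such as `0 * * * * /usr/local/bin/upload-cookbooks.sh` would then keep the server in sync hourly.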
`knife` does a couple of things, one of which is to let you use the Chef Server API from your workstation. This can be a mixed blessing: stuff like `knife ssh` is pretty cool, but having individuals upload cookbooks from their workstations gets troublesome once you have more than 5 or so people interacting with the system (the number varies by team, of course). You can also install `knife` on a CI system and define more complex flows there, so you're not running all this stuff on the Chef Server box. That's pretty useful if you want to use something like virtualization or Docker to do more in-depth testing before you push your code.
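On CI, each stage of that flow is just a command the runner executes. A sketch of such a pipeline script; the cookbook name and policy group are placeholders, and this assumes Chef Workstation, Test Kitchen, and Cookstyle are installed on the CI box:

```shell
#!/bin/sh
# ci-pipeline.sh -- sketch of a CI flow that tests before pushing.
set -e

# Lint and unit-test the cookbook (names here are hypothetical).
cookstyle cookbooks/my_cookbook
chef exec rspec cookbooks/my_cookbook/spec

# Converge and verify in a throwaway VM or Docker container.
kitchen test --destroy=always

# Only if everything above passed: solve dependencies and push the
# frozen policy to the server.
chef install Policyfile.rb
chef push production Policyfile.rb
```

The point is that the Chef Server box never runs any of this; it only receives the final, already-tested upload.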