OK. I think things got confused by the insistence that the internal Chef server has anything to do with the external Chef server. Since they're not consistent and never will be, treat them as distinct systems. They may share recipes from common git repositories, but that's an internal workflow you can handle separately.
But for whichever system you want to be highly reliable and available 24x7, providing that yourself, with your own set of servers, backups, reloads, and switchovers to an alternate (possibly freshly built) host, is re-inventing the wheel. Those features are built into the commercial Chef HA setup, and you can save dozens if not hundreds of hours of your local development time by paying for that and setting it up with Chef's professional services.
If you want to do it yourself, you can spend the cycles to build this from scratch. It's important to remember that a few components of the server, such as the SSL certificates and the private data bag encryption keys, are not part of a normal Chef backup operation, and it's difficult to guess which components of your backups you're actually using or needing. If you're prepared to consistently synchronize nodes, environments, roles, cookbooks, etc. from a git repository and take an "infrastructure as code" approach, you can avoid a great deal of the "tarball" requirement. Simply redeploy from the git repository, then apply the components that aren't loadable from it, such as the host SSL certificates and other private encryption keys.
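As a rough sketch of what that git-based redeploy can look like, shown here as a dry run that only prints the knife commands it would execute (the chef-repo layout and a knife.rb already pointed at the rebuilt server are assumptions):

```shell
#!/bin/sh
# Dry-run sketch of rebuilding Chef server state from a chef-repo checkout.
# Assumes the standard layout (cookbooks/, roles/, environments/, data_bags/).
set -eu

plan() { echo "knife $*"; }   # swap the echo for real knife runs when ready

plan upload cookbooks roles environments data_bags  # bulk-load repo content
plan ssl fetch      # trust the new server's (separately restored) certificate
```

Note that `knife upload` won't recreate the SSL certificates or the secret keys used for encrypted data bags; those still have to come from wherever you keep them safely.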
How you access those backups... depends on your backup tools. I like git repos for this content: they make tracking changes, and tagging releases, far, far easier. It's especially useful for detecting when some smart !@#!@# decides to hand-edit node, role, or environment attributes on the Chef server, and for pinpointing when it happened. I've encountered Chef maintainers who would do such operations manually and never commit them to the relevant git repo, leading to enormous confusion when things worked in staging but not in production.
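A minimal sketch of that drift check, using a throwaway repo to stand in for your chef-repo. In real use you'd run something like `knife download roles environments` into the checkout first, then diff; the file and the hand-edit below are stand-ins:

```shell
#!/bin/sh
# Detect hand-edits by diffing "downloaded" server state against the
# git repo of record. The repo and the edited file here are stand-ins.
set -eu

REPO="$(mktemp -d)"
cd "$REPO"
git init -q .
echo '{"run_list":["role[base]"]}' > node.json
git add node.json
git -c user.email=ci@example.invalid -c user.name=ci commit -qm "repo of record"

# Simulate what a fresh 'knife download' might reveal: a hand-edit.
echo '{"run_list":["role[base]","recipe[sneaky]"]}' > node.json

if git diff --quiet; then
    echo "server state matches the repo of record"
else
    echo "DRIFT: server state differs from the repo of record"
    git diff --stat
fi
```

Run nightly, the `git diff --stat` output tells you exactly which objects were touched, and the repo history tells you when the divergence began.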
Where the git repo, or tarball, lives and how it's made accessible to the exposed Chef server... is not a Chef problem. It's a network space and security problem. With the right setup, you might even be able to use https://github.com/ or a more local git server to store this content, which gains little from being tarballed. It's typically not that bulky unless you are stuffing large binaries into your cookbooks or data bags somewhere. I do think you want to get it off the Chef server itself, in case the host gets completely taken out by some tragic error.
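One low-effort way to keep that content off the Chef server is a nightly cron job on a separate workstation that snapshots live state into the repo and pushes it to an off-host remote. This is a hypothetical crontab entry; the repo path, remote name, and branch are assumptions:

```shell
# m h dom mon dow  command (runs on a workstation, NOT the Chef server)
30 2 * * * cd /srv/chef-repo && knife download / && git add -A && git commit -qm "nightly sync" || true ; git push -q origin main
```

The `|| true` swallows the failed commit on nights when nothing changed, so the push still runs.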
It's partly why I've pointed you to my nkadel-chef-local-wrapper tools. You can slap those on a separate test server, set up a git repo with your backups, and test the layouts locally without having to build a new Chef server.