On Mar 13, 2013, at 2:24 PM, Fletcher Nichol <firstname.lastname@example.org> wrote:
On Tuesday, 12 March, 2013 at 6:23 PM, Scott M. Likens wrote:
No problem. I created it: https://github.com/opscode/test-kitchen/issues/68
On Mar 12, 2013, at 12:27 PM, AJ Christensen <email@example.com> wrote:
I mentioned this to Fletcher on Twitter; he is aware of the situation!
Tickets would be good.
On 13 March 2013 06:48, Scott M. Likens <email@example.com> wrote:
Unfortunately yes. I was able to reduce it to 1 GET (installation of bats;
minitest is installed via rubygems, so it does not hit GitHub) by neutering
the installation script as in
https://gist.github.com/damm/c7e89264f3130b2c62fc and modifying
lib/kitchen/busser.rb to point to a different INSTALL_URL. That small
change reduces the problem greatly.
I imagine there are many ways to reduce this pain; I'm just not sure of the
quickest and sanest path.
If there's any way I can help, let me know.
On Mar 11, 2013, at 9:50 PM, Daniel DeLeo <email@example.com> wrote:
Do conditional GETs or HEAD requests count against the limit?
On Sunday, March 10, 2013 at 3:58 PM, Scott M. Likens wrote:
We’ve been having fun using the lxc driver for testing with test-kitchen
(which makes testing really fast!). Unfortunately I forgot that GitHub now
imposes rate limiting: 60 requests were used up in less than 15 minutes, in
part because LXC is so fast at creating, destroying, and booting new guests.
slikens@app379:~/git/haproxy_lwrp# curl https://api.github.com/rate_limit -u damm
Enter host password for user 'damm':
slikens@app379:~/git/haproxy_lwrp# curl https://api.github.com/rate_limit
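For anyone checking this programmatically, the /rate_limit endpoint returns JSON you can inspect; a small sketch (the response body below is illustrative sample data, not a live API call):

```ruby
require "json"

# Illustrative body from GET https://api.github.com/rate_limit; the real
# API returns your actual quota, and this call itself is not rate-limited.
body = '{"rate":{"limit":60,"remaining":0,"reset":1363129200}}'

rate = JSON.parse(body)["rate"]
puts "#{rate["remaining"]} of #{rate["limit"]} requests remaining"
# → 0 of 60 requests remaining
```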
I believe a few api requests occur in
I don’t think the right answer is ‘can we ship Kitchen Busser with Omnibus’.
Could we move https://raw.github.com/opscode/kb/go to opscode.com like we do
with the full installer? Something like curl -L https://www.opscode.com/chef/kb.sh?
With that we could reduce the GitHub API requests.
the basic error seen when getting this can be found
P.S. I can create an issue for this; I just wanted to start a conversation
about it and get a general idea of where we wanted to steer this before
Personally I think it’s awesome someone has bumped up against GitHub API throttling because their testing is going that fast. I was taking a look at this last night and there are 2 places where the git tag API calls are going to hurt us:
If we had a way to avoid these calls or ask some other endpoint (static or dynamic), we’d only be asking the GitHub file servers to download the correct tarballs. I’d like to avoid adding too many moving parts; however, it feels like at least one more something is called for here (barring some re-implementation of kb).
In my first implementation at reducing the GitHub API requests, I found that fixing just the installation script was enough to let me test without interruption. However, that doesn’t scale much beyond roughly 60 tests per hour if you use bats (minitest does not incur an API request, if I recall, which hints at some flexibility).
Currently I’m toying with a very small web app that can answer a simple question: what is the latest version-style git tag on a GitHub repo? This could make use of conditional GETs (which don’t fall under throttling) and the responses could be cached using ETag headers. The only real change to the kb code would be to call this API to get an answer and then download from GitHub as usual. Essentially this app would mitigate the API throttling for everyone in one place.
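The core of such a service could be quite small; a sketch under the assumption that tag names are fetched from GitHub's tags endpoint (the tag list and ETag value below are sample data, not live API results):

```ruby
require "net/http"
require "rubygems"

# Hypothetical core of the service: given a repo's tag names, pick the
# latest version-style tag using proper version ordering (so "v0.10.0"
# beats "v0.2.1", which a plain string sort would get wrong).
tags = %w[v0.1.0 v0.2.1 v0.10.0 v0.2.0]  # sample data, not a live API call
latest = tags.max_by { |t| Gem::Version.new(t.sub(/^v/, "")) }
puts latest  # → v0.10.0

# The upstream fetch can be a conditional GET: replaying the ETag from a
# previous response yields a 304 Not Modified, which GitHub does not count
# against the rate limit.
req = Net::HTTP::Get.new(URI("https://api.github.com/repos/opscode/kb/tags"))
req["If-None-Match"] = '"d41d8cd98f00b204e9800998ecf8427e"'  # cached ETag (made up)
```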
I would say if we are trying to cache the sources, then let’s host all the source files needed for testing (including chef-full, kb, etc.), since there might be someone who wants to test but can’t because their firewall blocks GitHub (yes, this number is still shockingly high).
So why not do conditional GETs and ETag caching on our workstations and remote nodes? The problem is that kb lives on those freshly created nodes, which have no past history to draw from. Their first call to the GitHub API will always be a cache miss. Worse, in certain environments (like LXC containers, or NAT’ed subnets with VM instances) the source IP seen by GitHub could be the same for every node anyway.
Anyway, that is what I’d like to spend a bit more time on tonight to flesh it out. Longer term there could be a more clever solution, but for now let’s get everyone back to fast, throttle-free tests.
My initial thought for implementing this was to stop hardcoding the kb installer URL and make it configurable in either .kitchen.local.yml or .kitchen.yml. Additionally, from the evidence above, it’s possible to ship kb and its counterparts to rubygems to help (this may be enough for people with no internet access to be able to test).
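As a sketch, such an override might look like the fragment below in .kitchen.local.yml; note the setting name is purely hypothetical, not a released test-kitchen option:

```yaml
# Hypothetical .kitchen.local.yml fragment; "busser_install_url" is an
# invented key for illustration, not an existing test-kitchen setting.
driver_plugin: lxc
busser_install_url: http://mirror.example.com/kb/go
```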
I hope that helps answer your question.
Scott M. Likens