Here you will find ideas and code straight from the Software Development Team at SportsEngine. Our focus is on building great software products for the world of youth and amateur sports. We are fortunate to be able to combine our love of sports with our passion for writing code.
The SportsEngine application originated in 2006 as a single Ruby on Rails 1.2 application. Today the SportsEngine Platform is composed of more than 20 applications built on Rails and Node.js, forming a service-oriented architecture that is poised to scale for the future.
Amazon Web Services announced a new generation of compute-optimized servers, the c3 family, back in November at re:Invent. The spec sheet for these servers is very impressive compared to the previous generation of High-CPU servers: twice the RAM, SSD storage, and a faster CPU. The CPU is the 2.8 GHz Intel Xeon E5-2680v2 (Ivy Bridge) processor, currently the third-fastest CPU as ranked by cpubenchmark.net.
Specs may be impressive, but we were really interested in how these servers performed against real production traffic. Back in December we were unable to launch one due to high demand. We tried again this week and were able to launch a c3.2xlarge into our pool of 16 c1.xlarge servers running our Ngin Ruby on Rails application. Both the c3.2xlarge and the c1.xlarge have 8 CPUs.
Via New Relic it immediately became apparent that the c3.2xlarge was, on average, 33% faster at processing Ngin requests than the c1.xlarge! New Relic also made it obvious that the c3.2xlarge was barely utilizing its CPU, so we modified the weights in our HAProxy load balancer configuration to send it twice the traffic of a c1.xlarge. The c3.2xlarge maintained its 33% performance advantage while serving twice the traffic. Amazing! Especially considering the cost is basically the same: $0.60 an hour on demand for the c3.2xlarge vs. $0.58 for the c1.xlarge.
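The weight change can be sketched in an HAProxy backend like this. This is an illustrative fragment, not our actual production config: the backend name, server names, and addresses are made up, and only the relative weights matter.

```haproxy
backend ngin_app
    balance roundrobin
    # c1.xlarge workers at the baseline weight
    server c1-app-01 10.0.1.11:8080 check weight 50
    server c1-app-02 10.0.1.12:8080 check weight 50
    # the c3.2xlarge gets double the weight, so it receives
    # twice the traffic of each c1.xlarge
    server c3-app-01 10.0.2.11:8080 check weight 100
```

Weights in HAProxy are relative, so doubling one server's weight against an otherwise uniform pool is all it takes to shift the ratio.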
A c1.xlarge showing an average response time of 368 ms while serving 800 requests per minute.
A c3.2xlarge showing a 276 ms average response time while serving 1600 requests per minute.
This is a simple example of the huge benefit of cloud computing on a platform like AWS: twice the throughput, a 33% reduction in app response time, and the same cost! Rarely do we see such a performance improvement essentially for free.
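The cost math is worth spelling out. A quick back-of-the-envelope sketch in Ruby, using the on-demand prices and per-server request rates quoted above:

```ruby
# Cost per million requests for each instance type, using the
# on-demand prices and observed request rates from this post.
c1_cost_per_hour = 0.58
c3_cost_per_hour = 0.60

c1_requests_per_hour = 800 * 60    # 800 req/min on the c1.xlarge
c3_requests_per_hour = 1600 * 60   # 1600 req/min on the c3.2xlarge

c1_cost_per_million = c1_cost_per_hour / c1_requests_per_hour * 1_000_000
c3_cost_per_million = c3_cost_per_hour / c3_requests_per_hour * 1_000_000

puts format('c1.xlarge:  $%.2f per million requests', c1_cost_per_million)
puts format('c3.2xlarge: $%.2f per million requests', c3_cost_per_million)
```

At these rates the c3.2xlarge serves a million requests for roughly half what the c1.xlarge costs, before even counting the faster response times.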