Here you will find ideas and code straight from the Software Development Team at SportsEngine. Our focus is on building great software products for the world of youth and amateur sports. We are fortunate to be able to combine our love of sports with our passion for writing code.
The SportsEngine application originated in 2006 as a single Ruby on Rails 1.2 application. Today the SportsEngine Platform is composed of more than 20 applications built on Rails and Node.js, forming a service-oriented architecture that is poised to scale for the future.
How we save money by using AWS Reserved Instances and our own AWS Auditor
Sport Ngin has a development workflow built on trust. It enables us to continuously deliver value to our customers by coding small. Our workflow empowers developers to do their jobs quickly and efficiently. We provide the right amount of structure to reduce risk without bogging developers down with too much process.
Data is useful; there’s no denying that. However, if there’s no way to measure it, is the data still valuable? Hubstats gives Sport Ngin’s Development team an easy way to measure, view, and record GitHub data.
At Sport Ngin we take downtime seriously. Ensuring that the platform is up and running at all times is imperative; even seconds of downtime while deploying an application is unacceptable. For quite some time we've used a technique on our Ruby apps called a rolling deploy. The rolling deploy algorithm is great, and we still use it for several of our applications. But as we've moved to a more service-oriented architecture, rolling deploys have left us wanting more.
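The rolling-deploy idea can be sketched roughly as follows. This is a minimal illustration, not our actual tooling: the `remove`, `deploy`, `health_check`, and `add` hooks are hypothetical placeholders injected as blocks, where real implementations would talk to a load balancer and run a deploy tool such as Capistrano.

```ruby
# Rolling deploy sketch: take one server out of rotation at a time,
# deploy to it, verify it is healthy, and put it back before moving on.
# Hooks are injected so the orchestration stays testable; every name
# here is illustrative, not Sport Ngin's real deployment code.
def rolling_deploy(servers, remove:, deploy:, health_check:, add:)
  servers.each do |server|
    remove.call(server)            # stop routing traffic to this server
    deploy.call(server)            # push the new release to it
    unless health_check.call(server)
      # Halt the roll-out so the remaining servers keep serving the
      # old, known-good release.
      raise "deploy failed on #{server}, halting roll-out"
    end
    add.call(server)               # put it back into rotation
  end
end
```

Because only one server is out of rotation at a time, the fleet keeps serving traffic throughout the deploy, which is what makes near-zero-downtime releases possible.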
On November 4th, the npm registry was unavailable for several hours, leaving many developers without a way to obtain the node modules they needed for development. In some cases, the npm downtime also kept people from deploying their applications.
Follow the steps outlined in our cookbook README to get an ElasticSearch cluster running in your OpsWorks stack.
We found we needed a Windows batch script that could create EBS snapshots for a specific volume and clean up the old ones. This is a gist of the script we ended up using to schedule our daily snapshots.
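The actual batch script is in the gist, but the pruning half of the job can be sketched in Ruby. This is an illustrative sketch only: the snapshot hashes and field names are assumptions, and a real job would fetch the list with the AWS SDK or CLI and then delete the returned snapshot IDs.

```ruby
# Sketch of the cleanup step of a daily EBS-backup job: given snapshots
# for one volume (hashes with :id and :started_at, illustrative fields),
# return the IDs of snapshots older than the retention window.
RETENTION_DAYS = 7

def snapshots_to_prune(snapshots, now: Time.now, days: RETENTION_DAYS)
  cutoff = now - days * 24 * 60 * 60   # retention window in seconds
  snapshots.select { |s| s[:started_at] < cutoff }
           .map    { |s| s[:id] }
end
```

Keeping the selection logic pure like this makes the retention policy easy to test apart from the AWS calls that create and delete the snapshots.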
In the past, we used fitter_happier for this purpose. Then we added MongoDB to one of our applications. To check the connection to that database, we had to monkey-patch fitter_happier, which didn't sit well with us. We also wanted to check that our Resque queues weren't backed up, which necessitated further monkey-patching.
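An aggregate health check of the kind described above might look like the sketch below. The registry class is an illustrative stand-in, not fitter_happier's actual API: in a real app the MongoDB check might issue a `ping` command through the Mongo driver, and the Resque check might compare `Resque.size(queue)` to a threshold.

```ruby
# Sketch of an aggregate health check: each dependency registers a
# block, and the app reports healthy only if every block returns true.
# This class is a hypothetical example, not fitter_happier's API.
class HealthCheck
  def initialize
    @checks = {}
  end

  def register(name, &block)
    @checks[name] = block
  end

  # Returns { name => true/false }; a check that raises counts as down.
  def run
    @checks.each_with_object({}) do |(name, check), results|
      results[name] = begin
        !!check.call
      rescue StandardError
        false
      end
    end
  end

  def healthy?
    run.values.all?
  end
end
```

Registering checks as blocks means a new dependency (MongoDB, Resque, or anything else) only needs one more `register` call, with no monkey-patching.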