How play.wakanda.org is implemented on the Amazon cloud
Uptime and speed are essential to Web applications, especially those with high volume. While play.wakanda.org is still a demo application in beta, we knew it would be hit simultaneously with thousands of requests from users around the world. To ensure we could keep up with demand, the app is hosted on Amazon's cloud services.
Architecture at a glance
The application's files are stored on Amazon S3, and the application itself runs and scales on Amazon EC2, spread across three global regions (routed via Route 53), each behind an Elastic Load Balancer (ELB).
Amazon S3 (Simple Storage Service) stores the files that comprise Wakanda Server, the Wakanda solution, the application data, and all necessary scripts.
Amazon EC2 (Elastic Compute Cloud) is responsible for hosting instances of the application, as well as storing the disk images from which the instances are created.
At startup, each EC2 virtual machine instance launches a bootscript, which pulls the stored elements from Amazon S3 and automatically loads the server, the solution, and the data. An additional "Run Server" script ensures that the server is continuously available, restarting it in the unlikely event of a crash.
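The "Run Server" watchdog can be sketched as a small shell loop. This is an illustration of the idea, not 4D's actual script; the binary and solution paths are placeholders.

```shell
#!/bin/sh
# Sketch of a "Run Server" watchdog: keep Wakanda Server running,
# restarting it whenever it exits abnormally. Paths are hypothetical.

RETRY_DELAY=${RETRY_DELAY:-5}   # seconds to wait before restarting

run_forever() {
  # Run the given server command; loop until it exits cleanly (status 0).
  until "$@"; do
    echo "server exited abnormally; restarting in ${RETRY_DELAY}s" >&2
    sleep "$RETRY_DELAY"
  done
}

# Placeholder locations -- the real bootscript pulls these from S3 first.
SERVER_BIN=/opt/wakanda/wakanda-server
SOLUTION=/opt/wakanda/solutions/play.waSolution

if [ -x "$SERVER_BIN" ]; then
  run_forever "$SERVER_BIN" "$SOLUTION"
fi
```

Because the loop only exits on a clean shutdown, a crashed server process is brought back automatically a few seconds later.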
For widespread global coverage, an instance is launched in three regions: Asia (physically hosted in Singapore), the US (Oregon), and Europe (Ireland). These settings, like everything else, are easily accessed through Amazon's browser-based console. Command-line junkies can also manage Amazon's parameters from their preferred terminal.
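For those command-line junkies, launching the same image in each region might look like the following sketch, using today's AWS CLI for illustration (the AMI ID and instance type are placeholders; the commands are echoed rather than executed so the sketch runs without AWS credentials):

```shell
#!/bin/sh
# Launch one instance of the same disk image in each of the three regions:
# Singapore, Oregon, and Ireland. AMI ID and instance type are placeholders.
launch_all() {
  for region in ap-southeast-1 us-west-2 eu-west-1; do
    echo "aws ec2 run-instances --region $region \
      --image-id ami-12345678 --instance-type m1.small --count 1"
  done
}

launch_all
```

Dropping the `echo` would execute the commands for real, one region at a time.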
Assignment of the correct region for each user is handled by the Route 53 DNS service, which directs users to the respective region's elastic load balancer. Within each region, the ELB distributes traffic across an Auto Scaling Group, whose scaling policies determine at what level of usage new instances are launched, based upon overall application response time. The response time is measured by CloudWatch, where we create alarms on load intensity (e.g. a total 5-second response time for all global requests) that trigger a scale-up, or a scale-down when load subsides.
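In AWS CLI terms, wiring a response-time alarm to a scale-up policy could look roughly like this. This is a configuration sketch under assumed names (the group, balancer, and alarm names are all placeholders), not our exact setup:

```shell
#!/bin/sh
# Sketch: scale up when ELB latency stays above 5 seconds.
# All resource names below are hypothetical.

# 1. A scaling policy that adds one instance to the group.
#    (put-scaling-policy prints the policy ARN, used by the alarm below.)
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name play-asg \
  --policy-name play-scale-up \
  --adjustment-type ChangeInQuantity \
  --scaling-adjustment 1

# 2. A CloudWatch alarm on the load balancer's average latency:
#    two consecutive 60-second periods above 5 seconds fire the policy.
aws cloudwatch put-metric-alarm \
  --alarm-name play-high-latency \
  --namespace AWS/ELB \
  --metric-name Latency \
  --dimensions Name=LoadBalancerName,Value=play-elb \
  --statistic Average --period 60 --evaluation-periods 2 \
  --threshold 5 --comparison-operator GreaterThanThreshold \
  --alarm-actions "$SCALE_UP_POLICY_ARN"
```

A mirror-image policy with `--scaling-adjustment -1`, fired by a low-latency alarm, handles the scale-down side.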
In our particular scenario, we first installed play.wakanda.org at 4D's headquarters in Paris, then started using Route 53 with an instance in each zone. We then ran stress tests and implemented load balancing and autoscaling to elastically distribute load across one to three instances, which has kept server response time at the right level.
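The one-to-three elasticity boils down to the Auto Scaling Group's minimum and maximum sizes. A minimal sketch for the Ireland region, again with placeholder names and AMI ID:

```shell
#!/bin/sh
# Sketch: an Auto Scaling Group that keeps between 1 and 3 instances
# behind the region's load balancer. Names and AMI ID are placeholders.

aws autoscaling create-launch-configuration \
  --launch-configuration-name play-lc \
  --image-id ami-12345678 \
  --instance-type m1.small

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name play-asg \
  --launch-configuration-name play-lc \
  --min-size 1 --max-size 3 --desired-capacity 1 \
  --availability-zones eu-west-1a eu-west-1b \
  --load-balancer-names play-elb
```

With these bounds, the scaling policies can never shut down the last instance or spin up more than three, no matter what the alarms report.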
See for yourself
So please, go ahead! Visit play.wakanda.org and hammer the server with your requests across the million data entities it contains. We think you’ll be pleased with both the speed and stability.
In the near future, we plan on publishing statistics, as well as displaying live usage stats within the application itself.