I tested the "Sutton" server with Siege, using the same code that gives this sort of response:
|             | Load Time | First Byte | Start Render | DOM Elements | Document Complete: Time | Requests | Bytes In | Fully Loaded: Time | Requests | Bytes In |
|-------------|-----------|------------|--------------|--------------|-------------------------|----------|----------|--------------------|----------|----------|
| First View  | 0.355s    | 0.321s     | 0.383s       | 31           | 0.355s                  | 1        | 3 KB     | 0.597s             | 3        | 8 KB     |
| Repeat View | 0.279s    | 0.240s     | 0.259s       | 31           | 0.279s                  | 1        | 3 KB     | 0.290s             | 2        | 3 KB     |
I also added some caching on the server side, so that it stops regenerating the content on every request.
Running Siege from a large instance (a small instance could not generate enough requests), I got the following results:
```
$ siege -c 40 -t 20s -d0 http://wimbledon.chart.is
Lifting the server siege... done.
Transactions:              28178 hits
Availability:             100.00 %
Elapsed time:              19.78 secs
Data transferred:          68.07 MB
Response time:              0.03 secs
Transaction rate:        1424.57 trans/sec
Throughput:                 3.44 MB/sec
Successful transactions:   28178
Failed transactions:           0
Longest transaction:        1.51
Shortest transaction:       0.00
```
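As a quick sanity check, the summary figures above are internally consistent: transactions divided by elapsed time reproduces the reported transaction rate, and data transferred divided by elapsed time reproduces the throughput.

```python
# Figures taken directly from the Siege summary above.
transactions = 28178   # hits
elapsed = 19.78        # secs
data_mb = 68.07        # MB transferred

rate = transactions / elapsed   # matches Siege's 1424.57 trans/sec
throughput = data_mb / elapsed  # matches Siege's 3.44 MB/sec
print(round(rate, 2), round(throughput, 2))  # → 1424.57 3.44
```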
So that's over 1,400 requests/second on the smallest AWS instance! The server was running at 100% CPU, so some simple performance analysis could yield significant improvements. It would also be worth putting more effort into making the benchmark trustworthy, especially at higher numbers of concurrent requests.
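One cheap way to make the benchmark more trustworthy is to repeat the run several times and report the mean and spread rather than a single figure. A sketch of that idea — `run_benchmark` is a hypothetical stand-in for shelling out to `siege` and parsing the "Transaction rate" line, and the sample rates are made up for illustration:

```python
import statistics

# Made-up sample figures; in practice each would come from one siege run.
SAMPLE_RATES = [1424.57, 1398.20, 1441.00, 1410.50, 1430.10]

def run_benchmark(trial):
    # Hypothetical stand-in for invoking siege and parsing its output.
    return SAMPLE_RATES[trial % len(SAMPLE_RATES)]

def summarize(trials=5):
    """Report mean and standard deviation across repeated runs."""
    rates = [run_benchmark(t) for t in range(trials)]
    return statistics.mean(rates), statistics.stdev(rates)

mean_rate, spread = summarize()
print(f"{mean_rate:.1f} trans/sec (stddev {spread:.1f})")
```

A large spread relative to the mean is a sign the single-run numbers above should be treated with caution.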