
SECTION 6 Sixth Anniversary Party: All-Star DJ/MC Battle & B-Boy 2-vs-2 Competition
Tickets: 50 RMB
For six years now we have been creating authentic homegrown hip-hop culture without ever stopping, and countless well-known hip-hop artists from China and abroad have passed across our stage. We have left many hip-hop fans with wonderful, lasting memories. We come from the east, south, west, and north of this city, and we command the scene's four core elements: DJ, MC, b-boy, and graffiti. This time we have once again brought together a crowd of hip-hop artists to mark our sixth anniversary, on May 29, 2010, at 9:00 PM.
Because we have enjoyed strong support from people in the scene and from companies over the years, this time we will hold a mainland-China MC battle and a b-boy 2-vs-2 street-dance showdown, with generous classic prizes prepared for the winners.
We have also assembled performances by many local hip-hop crews, bands, and DJs, and we welcome all fans of street culture to join us for this grand night.
We look forward to it! Let's share a spectacular night that will shake your imagination!
MC sign-up phone:
B-boy sign-up phone:
DJs: DJ Jam, DJ Wesley, DJ Dr. J, DJ Liuming, DJ Wordy.
MCs: 大卫, 杰子, 姬琦 (京城三杰), 贾伟, 孟国栋, 陈浩然 (IN3), MC 大狗 (Wuhan), MC Lil Ray, MC Raph, MC Webber, MC Sbazzo with "Bad Blood".
Bands: a Beijing live experience with all kinds of artists.
The Knutz: Jewell (keyboards), Tim (bass), Mao Mao a.k.a. Lucas (drums), Tia Ray (lead vocals).
Street dance: 5+5 Dance Studio & 紫禁城摇摆 (Forbidden City Rock)
In the following tests, we have measured the performance of several web application platforms, full-stack frameworks, and micro-frameworks (collectively, "frameworks").
For more information, read the introduction, motivation, and test requirements sections below.
Requirements summary: single database query
In this test, each request is processed by fetching a single row from a simple database table.
That row is then serialized as a JSON response.
Example response:
HTTP/1.1 200 OK
Content-Length: 32
Content-Type: application/json; charset=UTF-8
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT
{"id":3217,"randomNumber":2149}
For a more detailed description of the requirements, see the Test requirements section below.
Requirements summary: multiple database queries
In this test, each request is processed by fetching multiple rows from a simple database table and serializing these rows as a JSON response.
The test is run multiple times: testing 1, 5, 10, 15, and 20 queries per request.
All tests are run at 256 concurrency.
Example response for 10 queries:
HTTP/1.1 200 OK
Content-Length: 315
Content-Type: application/json; charset=UTF-8
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT
[{"id":4174,"randomNumber":331},{"id":51,"randomNumber":6544},{"id":4462,"randomNumber":952},{"id":2221,"randomNumber":532},{"id":9276,"randomNumber":3097},{"id":3056,"randomNumber":7293},{"id":6964,"randomNumber":620},{"id":675,"randomNumber":6601},{"id":8414,"randomNumber":6569},{"id":2753,"randomNumber":4065}]
For a more detailed description of the requirements, see the Test requirements section below.
Requirements summary: database updates
This test exercises database writes.
Each request is processed by fetching multiple rows from a simple database table, converting the rows to in-memory objects, modifying one attribute of each object in memory, updating each associated row in the database individually, and then serializing the list of objects as a JSON response.
The test is run multiple times: testing 1, 5, 10, 15, and 20 updates per request.
Note that the number of statements per request is twice the number of updates since each update is paired with one query to fetch the object.
All tests are run at 256 concurrency.
The response is analogous to the multiple-query test.
Example response for 10 updates:
HTTP/1.1 200 OK
Content-Length: 315
Content-Type: application/json; charset=UTF-8
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT
[{"id":4174,"randomNumber":331},{"id":51,"randomNumber":6544},{"id":4462,"randomNumber":952},{"id":2221,"randomNumber":532},{"id":9276,"randomNumber":3097},{"id":3056,"randomNumber":7293},{"id":6964,"randomNumber":620},{"id":675,"randomNumber":6601},{"id":8414,"randomNumber":6569},{"id":2753,"randomNumber":4065}]
For a more detailed description of the requirements, see the Test requirements section below.
Requirements summary: Fortunes
In this test, the framework's ORM is used to fetch all rows from a database table containing an unknown number of Unix fortune cookie messages (the table has 12 rows, but the code cannot have foreknowledge of the table's size).
An additional fortune cookie message is inserted into the list at runtime and then the list is sorted by the message text.
Finally, the list is delivered to the client using a server-side HTML template.
The message text must be considered untrusted and properly escaped and the UTF-8 fortune messages must be rendered properly.
Whitespace is optional and may comply with the framework's best practices.
Example response:
HTTP/1.1 200 OK
Content-Length: 1196
Content-Type: text/html; charset=UTF-8
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT
<!DOCTYPE html><html><head><title>Fortunes</title></head><body><table><tr><th>id</th><th>message</th></tr><tr><td>11</td><td>&lt;script&gt;alert(&quot;This should not be displayed in a browser alert box.&quot;);&lt;/script&gt;</td></tr><tr><td>4</td><td>A bad random number generator: 1, 1, 1, 1, 1, 4.33e+67, 1, 1, 1</td></tr><tr><td>5</td><td>A computer program does what you tell it to do, not what you want it to do.</td></tr><tr><td>2</td><td>A computer scientist is someone who fixes things that aren&apos;t broken.</td></tr><tr><td>8</td><td>A list is only as strong as its weakest link. — Donald Knuth</td></tr><tr><td>0</td><td>Additional fortune added at request time.</td></tr><tr><td>3</td><td>After enough decimal places, nobody gives a damn.</td></tr><tr><td>7</td><td>Any program that runs right is obsolete.</td></tr><tr><td>10</td><td>Computers make very fast, very accurate mistakes.</td></tr><tr><td>6</td><td>Emacs is a nice operating system, but I prefer UNIX. — Tom Christaensen</td></tr><tr><td>9</td><td>Feature: A bug with seniority.</td></tr><tr><td>1</td><td>fortune: No such file or directory</td></tr><tr><td>12</td><td>フレームワークのベンチマーク</td></tr></table></body></html>
For a more detailed description of the requirements, see the Test requirements section below.
Requirements summary: JSON serialization
In this test, each response is a JSON serialization of a freshly-instantiated object that maps the key message to the value Hello, World!
Example response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 28
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT
{"message":"Hello, World!"}
For a more detailed description of the requirements, see the Test requirements section below.
Requirements summary: plaintext
In this test, the framework responds with the simplest of responses: a "Hello, World" message rendered as plain text.
The size of the response is kept small so that gigabit Ethernet is not the limiting factor for all implementations.
HTTP pipelining is enabled and higher client-side concurrency levels are used for this test (see the "Data table" view).
Example response:
HTTP/1.1 200 OK
Content-Length: 15
Content-Type: text/plain; charset=UTF-8
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT
Hello, World!
For a more detailed description of the requirements, see the Test requirements section below.
If you have any comments about this round, please post at the project's forum / mailing list.
Introduction
This is a performance comparison of many web application frameworks executing fundamental tasks such as JSON serialization, database access, and server-side template composition.
Each framework is operating in a realistic production configuration. Results are captured on Amazon EC2 and on physical hardware. The test implementations are largely community-contributed and all source is available at the project's GitHub repository.
Note: We're using the word "framework" loosely to refer to platforms, micro-frameworks, and full-stack frameworks.
In a March 2013 blog entry, we published the results of comparing the performance of several web application frameworks executing simple but representative tasks: serializing JSON objects and querying databases.
Since then, community input has been tremendous.
We (speaking now for all contributors to the project) have been regularly updating the test implementations, expanding coverage, and capturing results in semi-regular updates that we call "rounds."
Ready to see the results of the latest round?
View the latest results, or check out earlier rounds, in the "Current and Previous Rounds" section below.
Making improvements
We expect that all frameworks' tests could be improved with community input.
For that reason, we are extremely happy to receive GitHub pull requests from fans of any framework.
We would like our tests for every framework to perform optimally, so we invite you to please join in.
What's to come
Feedback has been continuous and we plan to keep updating the project in several ways, such as:
Coverage of more frameworks.
Thanks to community contributions to date, the number of frameworks covered has already grown quite large.
We're happy to add more if you submit a pull request.
Tests on more types of hardware.
Enhancements to this results web site.
Questions or comments
Check out the "Expected questions" section for answers to some common questions.
If you have other questions, comments, recommendations, criticisms, or any other form of feedback, please post at the .
Current and Previous Rounds
Round 10: Significant restructuring of the project's infrastructure, including re-organization of the project's directory structure, integration with continuous-integration tooling for rapid review of pull requests, and the addition of numerous frameworks.
Round 9: Thanks to the contribution of a 10-gigabit testing environment, the network barrier that frustrated top-performing frameworks in previous rounds has been removed. The Dell R720xd servers in this new environment feature dual Xeon E5-2660 v2 processors and illustrate how the spectrum of frameworks scales to forty processor cores.
Round 8: Six more frameworks contributed by the community take the total count to 90 frameworks and 230 permutations (variations of configuration).
Meanwhile, several implementations have been updated and the highest-performance platforms jockey for the top spot on each test's charts.
Round 7: After a several-month hiatus, another large batch of frameworks has been added by the community.
Even after consolidating a few, Round 7 counts 84 frameworks and over 200 test permutations!
This round also was the first to use a community-review process.
Future rounds will see roughly one week of preview and review by the community prior to release to the public here.
Round 6: Still more tests were contributed by the developer community, bringing the number of frameworks to 74!
Round 6 also introduces a "plaintext" test type that exercises HTTP pipelining and higher client-side concurrency levels.
Round 5: The developer community comes through with the addition of ASP.NET tests ready to run on Windows.
This round is the first with Windows tests, and we seek assistance from Windows experts to apply additional tuning to bring the results to parity with the Linux tests.
Round 5 also introduces an "update" test type to exercise ORM and database writes.
Round 4: With 57 frameworks in the benchmark suite, we've added a filter control allowing you to narrow your view to only the frameworks you want to see.
Round 4 also introduces the "Fortune" test to exercise server-side templates and collections.
Round 3: We created this stand-alone site for comparing the results data captured across many web application frameworks. Even more frameworks have been contributed by the community, and the testing methodology was changed slightly thanks to enhancements to the testing tool named Wrk.
Round 2: In April 2013, we published a follow-up blog entry named "Frameworks Round 2" where we incorporated changes suggested and contributed by the community.
Round 1: In a March 2013 blog entry, we published the results of comparing the performance of several web application frameworks executing simple but representative tasks: serializing JSON objects and querying databases.
The community reaction was terrific.
We are flattered by the volume of feedback.
We received dozens of comments, suggestions, questions, criticisms, and most importantly, GitHub pull requests at the repository we set up for this project.
Motivation
Choosing a web application framework involves evaluation of many factors. While comparatively easy to measure, performance is frequently given little consideration. We hope to help change that.
Application performance can be directly mapped to hosting dollars, and for companies both large and small, hosting costs can be a pain point. Weak performance can also cause premature and costly scale pain by requiring earlier optimization efforts and increased architectural complexity. Finally, slow applications yield poor user experience and may suffer penalties levied by search engines.
What if building an application on one framework meant that at the very best your hardware is suitable for one tenth as much load as it would be had you chosen a different framework? The differences aren't always that extreme, but in some cases, they might be. Especially with several modern high-performance frameworks offering respectable developer efficiency, it's worth knowing what you're getting into.
Terminology
framework
We use the word framework loosely to refer to any HTTP stack—a full-stack framework, a micro-framework, or even a web platform such as Rack, Servlet, or plain PHP.
permutation
A combination of attributes that compose a full technology stack being tested (two examples: node.js paired with MongoDB and node.js paired with MySQL).
Some frameworks have seen many permutations contributed; others only one or a few.
test type
One of the workloads we exercise, such as JSON serialization, single-query, multiple-query, fortunes, data updates, and plaintext.
test
An individual test is a measurement of the performance of a permutation's implementation of a test type.
For example, a test might be measuring Wicket paired with MySQL running the single-query test type.
implementation
Sometimes called "test implementations," these are the bodies of code and configuration created to test permutations according to the test requirements.
These are frequently contributed by fans, advocates, or the maintainers of frameworks.
Together with the toolset, test implementations are the meat of this project.
toolset
A set of Python scripts that run our tests.
run
An execution of the benchmark toolset across the suite of test implementations, either in full or in part, in order to capture results for any purpose.
preview
A capture of data from a run used by project participants to sanity-check prior to an official round.
round
A posting of "official" results on this web site.
This is mostly for ease of consumption by readers and good-spirited and healthy competitive bragging rights.
For in-depth analysis, we encourage you to examine the source code and run the tests on your own hardware.
Expected questions
We expect that you might have a bunch of questions.
Here are some that we're anticipating.
But please contact us if you have a question we're not dealing with here or just want to tell us we're doing it wrong.
Frameworks and configuration
"You call x a framework, but it's a platform."
See the terminology section above.
We are using the word "framework" loosely to refer to anything found on the spectrum ranging from full-stack frameworks, micro-frameworks, to platforms.
If it's used to build web applications, it probably qualifies.
That said, we understand that comparing a full-stack framework versus platforms or vice-versa is unusual.
We feel it's valuable to be able to compare these, for example to understand the performance overhead of additional abstraction.
You can use the filters in the results viewer to adjust the rows you see in the charts.
"You configured framework x incorrectly, and that explains the numbers you're seeing." Whoops! Please let us know how we can fix it, or submit a
pull request, so we can get it right.
"Why include this Gemini framework I've never heard of?" We have included our in-house Java web framework, Gemini, in our tests. We've done so because it's of interest to us. You can consider it a stand-in for any relatively lightweight minimal-locking Java framework. While we're proud of how it performs among the well-established field, this exercise is not about Gemini.
"Why don't you test framework X?" We'd love to, if we can find the time. Even better, craft the test implementation yourself and submit a
pull request so we can get it in there faster!
"Some frameworks use process- have you accounted for that?" Yes, we've attempted to use production-grade configuration settings for all frameworks, including those that rely on process-level concurrency.
For the EC2 tests, for example, such frameworks are configured to utilize the two virtual cores provided on an c3.large (in previous rounds, m1.large) instance.
For the i7 tests, they are configured to use the eight hyper-threading cores of our hardware's i7 CPUs.
"Have you enabled
for the PHP tests?" Yes, the PHP tests run with APC and PHP-FPM on nginx.
"Why are you using a (slightly) old version of framework X?" It's nothing personal! With so many frameworks we have a never-ending game of whack-a-mole. If you think an update will affect the results, please let us know (or better yet, submit a
pull request) and we'll get it updated!
"It's unfair and possibly even incorrect to compare X and Y!" It may be alarming at first to see the full results table, where one may evaluate fra MySQL vs P Go vs P ORM vs raw d and any number of other possibly irrational comparisons.
Many readers desire the ability to compare these and other permutations.
If you prefer to view an unpolluted subset, you may use the filters available at the top of the results page.
We believe that comparing frameworks with plausible and diverse technology stacks, despite the number of variables, is precisely the value of this project.
With sufficient time and effort, we hope to continuously broaden the test permutations.
But we recommend against ignoring the data on the basis of concerns about multi-variable comparisons.
"If you are testing production deployments, why is logging disabled?" At present, we have elected to run tests with logging features disabled.
Although this is not consistent with production deployments, we avoid a few complications related to logging, most notably disk capacity and consistent granularity of logging across all test implementations.
In spot tests, we have not observed significant performance impact from logging when enabled.
If there is strong community consensus that logging is necessary, we will reconsider this.
"Tell me about the Windows configuration." We are very thankful to the community members who have contributed Windows tests.
In fact, nearly the entirety of the Windows configuration has been contributed by subject-matter experts from the community.
Thanks to their effort, we now have tests covering both Windows paired with Linux databases and Windows paired with Microsoft SQL Server.
As with all aspects of this project, we welcome continued input and tuning by other experts.
If you have advice on better tuning the Windows tests, please submit
issues or pull requests.
"Framework X has in-memory caching, why don't you use that?" In-memory caching, as provided by some frameworks, yields higher performance than repeatedly hitting a database, but isn't available in all frameworks, so we omitted in-memory caching from these tests. Cache tests are planned for later rounds.
"What about other caching approaches, then?" Remote-memory or near-memory caching, as provided by Memcached and similar solutions, also improves performance and we would like to conduct future tests simulating a more expensive query operation versus Memcached. However, curiously, in spot tests, some frameworks paired with Memcached were conspicuously slower than other frameworks directly querying the authoritative MySQL database (recognizing, of course, that MySQL had its entire data-set in its own memory cache). For simple "get row ID n" and "get all rows" style fetches, a fast framework paired with MySQL may be faster and easier to work with versus a slow framework paired with Memcached.
"Why doesn't your test include more substantial algorithmic work?" Great suggestion. We hope to in the future!
"What about
options such as Varnish?" We are expressly not using reverse proxies on this project.
There are other benchmark projects that evaluate the performance of reverse proxy software.
This project measures the performance of web applications in any scenario where requests reach the application server.
Given that objective, allowing the web application to avoid doing the work thanks to a reverse proxy would invalidate the results.
If it's difficult to conceptualize the value of measuring performance beyond the reverse proxy, imagine a scenario where every response provides user-specific and varying data.
It's also notable that some platforms respond with sufficient performance to potentially render a reverse proxy unnecessary.
"Do all the database tests use connection pooling?" Yes, our expectation is that all tests use connection pooling.
"How is each test run?" Each test is executed as follows:
Restart the database servers.
Start the platform and framework using their start-up mechanisms.
Run a 5-second primer at 8 client-concurrency to verify that the server is in fact running.
These results are not captured.
Run a 15-second warmup at 256 client-concurrency to allow lazy-initialization to execute and just-in-time compilation to run.
These results are not captured.
Run a 15-second captured test for each of the concurrency levels (or iteration counts) exercised by the test type.
Concurrency-variable test types are tested at 8, 16, 32, 64, 128, and 256 client-side concurrency.
The high-concurrency plaintext test type is tested at 256, 1,024, 4,096, and 16,384 client-side concurrency.
Stop the platform and framework.
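To make the procedure above concrete, here is a rough, illustrative sketch of a single test execution driven through the Wrk command line. This is not the project's actual toolset (which is a larger set of Python scripts); the host name tfb-server, the port, the URI, and the thread count of 8 are placeholder assumptions.
import subprocess

URL = "http://tfb-server:8080/json"  # placeholder host, port, and test URI

def wrk(seconds, concurrency, capture):
    # Wrk runs for a prescribed duration at a fixed connection count.
    result = subprocess.run(
        ["wrk", "-d", str(seconds), "-c", str(concurrency), "-t", "8", URL],
        capture_output=True, text=True)
    return result.stdout if capture else None

wrk(5, 8, capture=False)       # 5-second primer at 8 concurrency; discarded
wrk(15, 256, capture=False)    # 15-second warmup at 256 concurrency; discarded
captured = [wrk(15, c, capture=True)           # captured 15-second tests
            for c in (8, 16, 32, 64, 128, 256)]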
"Hold on, 15 seconds is not enough to gather useful data."
This is a reasonable concern.
But in examining the data, we have seen no evidence that the results have changed by reducing the individual test durations from 60 seconds to 15 seconds.
The duration reduction was made necessary by the growing number of test permutations and a target that the full suite complete in less than one day.
With additional effort, we aim to build a continuously-running test environment that will pull the latest source and begin a new run as soon as a previous run completes.
When we have such an environment ready, we will be comfortable with multi-day execution times, so we plan to extend the duration of each test when that happens.
"Also, a 15-second warmup is not sufficient." On the contrary, we have not yet seen evidence suggesting that any additional warmup time is beneficial to any framework.
In fact, for frameworks based on JIT platforms such as the Java Virtual Machine (JVM), spot tests show that the JIT has even completed its work already after just the primer and before the warmup starts—the warmup (256-concurrency) and real 256-concurrency tests yield results that are separated only by test noise. However, as with test durations, we intend to increase the duration of the warmup when we have a continuously-running test environment.
Environment
"What is Wrk?" Although many web performance tests use ApacheBench from Apache to generate HTTP requests, we now use
for this project. ApacheBench remains a single-threaded tool, meaning that for higher-performance test scenarios, ApacheBench itself is a limiting factor. Wrk is a multithreaded tool that provides a similar function, allowing tests to run for a prescribed amount of time (rather than limited to a number of requests) and providing us result data including total requests completed and latency information.
"Doesn't benchmarking on Amazon EC2 invalidate the results?" Our opinion is that doing so confirms precisely what we're trying to test: performance of web applications within realistic production environments. Selecting EC2 as a platform also allows the tests to be readily verified by anyone interested in doing so. However, we've also executed tests on our Core i7 (Sandy Bridge) workstations running Ubuntu as a non-virtualized comparison. Doing so confirmed our suspicion that the ranked order and relative performance across frameworks is mostly consistent between EC2 and physical hardware. That is, while the EC2 instances were slower than the physical hardware, they were slower by roughly the same proportion across the spectrum of frameworks.
"Tell me about your physical hardware." For the tests we refer to as "i7" tests, we're using our office workstations.
These use Intel i7-2600K processors, making them a little antiquated, to be honest.
These are connected via an unmanaged low-cost gigabit Ethernet switch.
In previous rounds, we used a two-machine configuration where the load-generation and database role coexisted.
Although these two roles were not crowding one another out (neither role was starved for CPU time), as of Round 7, we are using a three-machine configuration for the physical hardware tests.
The machine roles are:
Application server, which hosts the application code and web server, where applicable.
Database server, which hosts the common databases.
Starting with Round 5, we equipped the database server with a Samsung 840 Pro SSD.
Load generator, which makes HTTP requests to the Application server via the Wrk load generation tool.
"What is Resin? Why aren't you using Tomcat for the Java frameworks?" Resin is a Java application server. The GPL version that we used for our tests is a relatively lightweight Servlet container. We tested on Tomcat as well but ultimately dropped Tomcat from our tests because Resin was slightly faster across all Servlet-based frameworks.
"Do you run any warmups before collecting results data?" Yes. See "how is each test run" above. Every test is preceded by a warmup and brief (several seconds) cooldown prior to gathering test data.
"I am about to start a new web how should I interpret these results?" Most importantly, recognize that performance data should be one part of your decision-making process.
High-performance web applications reduce hosting costs and improve user experience.
Additionally, recognize that while we have aimed to select test types that represent workloads that are common for web applications, nothing beats conducting performance tests yourself for the specific workload of your application.
In addition to performance, consider other requirements such as your preferred language, your invested knowledge in one or more of the frameworks we've tested, and the documentation and support provided by the framework's community.
Combined with an examination of the source code, the results seen here should help you identify a platform and framework that is high-performance while still meeting your other requirements.
"Why are the leaderboards for JSON Serialization and Plaintext so different on EC2 versus i7?" Put briefly, for fast frameworks on our i7 physical hardware, the limiting factor for the JSON test is our gigabit E whereas on EC2, the limit is the CPU. Assuming proper response headers are provided, at approximately 200,000 non-pipelined and 550,000 pipelined responses per second and above, the network is saturated.
"Where did earlier rounds go?" To better capture HTTP errors reported by Wrk, we have restructured the format of our results.json file. The test tool changed at Round 2 and some framework IDs were changed at Round 3. As a result, the results.json for Rounds 1 and 2 would have required manual editing and we opted to simply remove the previous rounds from this site. You can still see those rounds at our blog: , .
"What does 'Did not complete' mean?" Starting with Round 9, we have added validation checks to confirm that implementations are behaving as we have specified in
of this site.
An implementation that does not return the correct results, bypasses some of the requirements, or even formats the results in a manner inconsistent with the requirements will be marked as "Did not complete."
We have solicited corrections from prior contributors and have attempted to address many of these, but it will take more time for all implementations to be correct.
If you are a project participant and your contribution is marked as "Did not complete," please help us resolve this by contacting us at the forum / mailing list.
We may ultimately need a pull request from you, but we'd be happy to help you understand what specifically is triggering a validation error with your implementation.
Join the conversation
Post questions, comments, criticism, and suggestions on the forum / mailing list.
Simulating production environments
For this project, we aimed to configure every framework according to the best practices for production deployments gleaned from documentation and popular community opinion. The goal is approximating a sensible production deployment as accurately as possible. We also want this project to be as transparent as possible, so we have posted our test suites on GitHub.
Environment details
10-gigabit: Dell R720xd servers with dual Xeon E5-2660 v2 processors (40 HT cores) and 32 GB of memory; database servers equipped with SSDs in RAID; switched 10-gigabit Ethernet
i7: Sandy Bridge Core i7-2600K workstations with 8 GB memory (early 2011 vintage); database server equipped with Samsung 840 Pro SSD; switched gigabit Ethernet
EC2: Amazon EC2 c3.large instances (2 vCPU each); switched gigabit Ethernet (m1.large was used through Round 9)
PHP with APC, PHP-FPM, nginx
Ubuntu 12.04 64-bit
Windows Server 2012 64-bit
Resin Servlet Container (GPL)
GitHub repository
All of the code used to produce the comparison of web frameworks seen here can be found in the project's GitHub repository. If you have any corrections or contributions, please submit a pull request!
Forum / mailing list
Join the conversation about this project on the mailing list.
Test requirements
We invite fans of frameworks and especially authors or maintainers of frameworks to join us in expanding the coverage of this project by implementing tests and contributing to the GitHub repository.
The following are specifications for each of the test types we have included to date in this project.
Granted, the specifications read as quite verbose, but that's because they are specifications.
The implementations tend to be quite easy in practice.
This project is evolving and we will periodically add new test types.
As new test types are added, we encourage but do not require contributors of previous implementations to implement tests for the new test types.
Wholly new test implementations are also encouraged to include all test types but are not required to do so.
If you have limited time, we recommend you start with the easiest test types (1, 2, 3, and 6) and then continue beyond those as time permits.
General requirements
The following requirements apply to all test types below.
All test implementations should be production-grade.
The particulars of this will vary by framework and platform, but the general sentiment is that the code and configuration should be suitable for a production deployment.
The word should is used here because production-grade is our goal, but we don't want this to be a roadblock.
If you're submitting a new test and uncertain whether your code is production-grade, submit it anyway and then solicit input from other subject-matter experts.
All test implementations must disable all disk logging.
For many reasons, we expect all tests will run without writing logs to disk.
Most importantly, the volume of requests is sufficiently high to fill up disks even with only a single line written to disk per request.
Please disable all forms of disk logging.
We recommend but do not require disabling console logging as well.
Specific characters and character case matter.
Assume the client consuming your service's JSON responses will be using a case-sensitive language such as JavaScript.
In other words, if a test specifies that a map's key is id, use id.
Do not use Id or ID.
This strictness is required not only because it's sensible but also because our automated validation checks are picky.
Test type 1: JSON serialization
This test exercises the framework fundamentals including keep-alive support, request routing, request header parsing, object instantiation, JSON serialization, response header generation, and request count throughput.
Requirements
For each request, an object mapping the key message to Hello, World! must be instantiated.
The recommended URI is /json.
A JSON serializer must be used to convert the object to JSON.
The response text must be {"message":"Hello, World!"}, but white-space variations are acceptable.
The response content length should be approximately 28 bytes.
The response content type must be set to application/json.
The response headers must include either Content-Length or Transfer-Encoding.
The response headers must include Server and Date.
gzip compression is not permitted.
Server support for HTTP Keep-Alive is strongly encouraged but not required.
If HTTP Keep-Alive is enabled, no maximum Keep-Alive timeout is specified by this test.
The request handler will be exercised at concurrency levels ranging from 8 to 256.
The request handler will be exercised using GET requests.
Example request
GET /json HTTP/1.1
Host: server
User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/ Firefox/30.0 AppleWebKit/600.00 Chrome/30.0.0000.0 Trident/10.0 Safari/600.00
Cookie: uid=; __utma=1.....12; wd=
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Connection: keep-alive
Example response
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 28
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT
{"message":"Hello, World!"}
Test type 2: Single database query
This test exercises the framework's object-relational mapper (ORM), random number generator, database driver, and database connection pool.
Requirements
For every request, a single row from a World table must be retrieved from a database table.
The recommended URI is /db.
The schema for World is id (int, primary key) and randomNumber (int), except for MongoDB, wherein the identity column is _id, with the leading underscore.
The World table is known to contain 10,000 rows.
The row retrieved must be selected by its id using a random number generator (ids range from 1 to 10,000).
The row should be converted to an object using an object-relational mapping (ORM) tool.
Tests that do not use an ORM will be classified as "raw" meaning they use the platform's raw database connectivity.
The object (or database row, if an ORM is not used) must be serialized to JSON.
The response content length should be approximately 32 bytes.
The response content type must be set to application/json.
The response headers must include either Content-Length or Transfer-Encoding.
The response headers must include Server and Date.
Use of an in-memory cache of World objects or rows by the application is not permitted.
Use of prepared statements for SQL database tests (e.g., for MySQL) is encouraged but not required.
gzip compression is not permitted.
Server support for HTTP Keep-Alive is strongly encouraged but not required.
If HTTP Keep-Alive is enabled, no maximum Keep-Alive timeout is specified by this test.
The request handler will be exercised at concurrency levels ranging from 8 to 256.
The request handler will be exercised using GET requests.
Example request
GET /db HTTP/1.1
Host: server
User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/ Firefox/30.0 AppleWebKit/600.00 Chrome/30.0.0000.0 Trident/10.0 Safari/600.00
Cookie: uid=; __utma=1.....12; wd=
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Connection: keep-alive
Example response
HTTP/1.1 200 OK
Content-Length: 32
Content-Type: application/json; charset=UTF-8
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT
{"id":3217,"randomNumber":2149}
Test type 3: Multiple database queries
This test is a variation of Test #2 and also uses the World table.
Multiple rows are fetched to more dramatically punish the database driver and connection pool.
At the highest queries-per-request tested (20), this test demonstrates all frameworks' convergence toward zero requests-per-second as database activity increases.
Requirements
For every request, an integer query string parameter named queries must be retrieved from the request.
The parameter specifies the number of database queries to execute in preparing the HTTP response (see below).
The recommended URI is /queries.
The queries parameter must be bounded to between 1 and 500.
If the parameter is missing, is not an integer, or is an integer less than 1, the value should be interpreted as 1; if greater than 500, the value should be interpreted as 500.
The request handler must retrieve a set of World objects, equal in count to the queries parameter, from the World database table.
Each row must be selected randomly in the same fashion as the single database query test (Test #2 above).
Since this test is designed to exercise multiple queries, each row must be selected individually by a query. It is not acceptable to retrieve all required rows using a SELECT ... WHERE id IN (...) clause.
Each World object must be added to a list or array.
The list or array must be serialized to JSON and sent as a response.
The response content type must be set to application/json.
The response headers must include either Content-Length or Transfer-Encoding.
The response headers must include Server and Date.
Use of an in-memory cache of World objects or rows by the application is not permitted.
Use of prepared statements for SQL database tests (e.g., for MySQL) is encouraged but not required.
gzip compression is not permitted.
Server support for HTTP Keep-Alive is strongly encouraged but not required.
If HTTP Keep-Alive is enabled, no maximum Keep-Alive timeout is specified by this test.
The request handler will be exercised at 256 concurrency only.
The request handler will be exercised with query counts of 1, 5, 10, 15, and 20.
The request handler will be exercised using GET requests.
Example request
GET /queries?queries=10 HTTP/1.1
Host: server
User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/ Firefox/30.0 AppleWebKit/600.00 Chrome/30.0.0000.0 Trident/10.0 Safari/600.00
Cookie: uid=; __utma=1.....12; wd=
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Connection: keep-alive
Example response
HTTP/1.1 200 OK
Content-Length: 315
Content-Type: application/json; charset=UTF-8
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT
[{"id":4174,"randomNumber":331},{"id":51,"randomNumber":6544},{"id":4462,"randomNumber":952},{"id":2221,"randomNumber":532},{"id":9276,"randomNumber":3097},{"id":3056,"randomNumber":7293},{"id":6964,"randomNumber":620},{"id":675,"randomNumber":6601},{"id":8414,"randomNumber":6569},{"id":2753,"randomNumber":4065}]
Test type 4: Fortunes
This test exercises the ORM, database connectivity, dynamic-size collections, sorting, server-side templates, XSS countermeasures, and character encoding.
Requirements
The recommended URI is /fortunes.
A Fortune database table contains a dozen Unix-style fortune-cookie messages.
The schema for Fortune is id (int, primary key) and message (varchar), except for MongoDB, wherein the identity column is _id, with the leading underscore.
Using an ORM, all Fortune objects must be fetched from the Fortune table, and placed into a list data structure.
Tests that do not use an ORM will be classified as "raw" meaning they use the platform's raw database connectivity.
The list data structure must be dynamically sized or equivalent and should not be dimensioned using foreknowledge of the row-count of the database table.
Within the scope of the request, a new Fortune object must be constructed and added to the list. This confirms that the data structure is dynamic-sized. The new fortune is not persisted; it is ephemeral for the scope of the request.
The new Fortune's message must be "Additional fortune added at request time."
The list of Fortune objects must be sorted by the order of the message field. No ORDER BY clause is permitted in the database query (ordering within the query would be of negligible value anyway since a newly instantiated Fortune is added to the list prior to sorting).
The sorted list must be provided to a server-side template and rendered to simple HTML (see below for minimum template). The resulting HTML table displays each Fortune's id number and message text.
This test does not include external assets (CSS, JavaScript); a later test type will include assets.
The HTML generated by the template must be sent as a response.
Be aware that the message text fields are stored as UTF-8 and one of the fortune cookie messages is in Japanese.
The resulting HTML must be delivered using UTF-8 encoding.
The Japanese fortune cookie message must be displayed correctly.
Be aware that at least one of the message text fields includes a <script> tag.
The server-side template must assume the message text cannot be trusted and must escape the message text properly.
The implementation is encouraged to use best practices for templates such as layout inheritance, separate header and footer files, and so on. However, this is not required. We request that implementations do not manage assets (JavaScript, CSS, images). We are deferring asset management until we can craft a more suitable test.
The response content type must be set to text/html.
The response headers must include either Content-Length or Transfer-Encoding.
The response headers must include Server and Date.
Use of an in-memory cache of Fortune objects or rows by the application is not permitted.
Use of prepared statements for SQL database tests (e.g., for MySQL) is encouraged but not required.
gzip compression is not permitted.
Server support for HTTP Keep-Alive is strongly encouraged but not required.
If HTTP Keep-Alive is enabled, no maximum Keep-Alive timeout is specified by this test.
The request handler will be exercised at concurrency levels ranging from 8 to 256.
The request handler will be exercised using GET requests.
Example request
GET /fortunes HTTP/1.1
Host: server
User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/ Firefox/30.0 AppleWebKit/600.00 Chrome/30.0.0000.0 Trident/10.0 Safari/600.00
Cookie: uid=; __utma=1.....12; wd=
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Connection: keep-alive
Example response
HTTP/1.1 200 OK
Content-Length: 1196
Content-Type: text/html; charset=UTF-8
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT
<!DOCTYPE html><html><head><title>Fortunes</title></head><body><table><tr><th>id</th><th>message</th></tr><tr><td>11</td><td>&lt;script&gt;alert(&quot;This should not be displayed in a browser alert box.&quot;);&lt;/script&gt;</td></tr><tr><td>4</td><td>A bad random number generator: 1, 1, 1, 1, 1, 4.33e+67, 1, 1, 1</td></tr><tr><td>5</td><td>A computer program does what you tell it to do, not what you want it to do.</td></tr><tr><td>2</td><td>A computer scientist is someone who fixes things that aren&apos;t broken.</td></tr><tr><td>8</td><td>A list is only as strong as its weakest link. — Donald Knuth</td></tr><tr><td>0</td><td>Additional fortune added at request time.</td></tr><tr><td>3</td><td>After enough decimal places, nobody gives a damn.</td></tr><tr><td>7</td><td>Any program that runs right is obsolete.</td></tr><tr><td>10</td><td>Computers make very fast, very accurate mistakes.</td></tr><tr><td>6</td><td>Emacs is a nice operating system, but I prefer UNIX. — Tom Christaensen</td></tr><tr><td>9</td><td>Feature: A bug with seniority.</td></tr><tr><td>1</td><td>fortune: No such file or directory</td></tr><tr><td>12</td><td>フレームワークのベンチマーク</td></tr></table></body></html>
Minimum template
Along with the example response above, the following template illustrates the minimum requirements for the server-side template. White-space can be optionally eliminated.
<!DOCTYPE html>
<html>
<head><title>Fortunes</title></head>
<body>
<table>
<tr><th>id</th><th>message</th></tr>
<tr><td>{{id}}</td><td>{{message}}</td></tr>
</table>
</body>
</html>
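A hedged sketch of the whole flow follows, again using an in-memory SQLite stand-in, only a few abbreviated seed rows for brevity (the real table holds a dozen fortunes), and plain string formatting where a real implementation would use its framework's template engine:
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Fortune (id INTEGER PRIMARY KEY, message TEXT)")
conn.executemany("INSERT INTO Fortune VALUES (?, ?)", [
    (1, "fortune: No such file or directory"),
    (11, '<script>alert("This should not be displayed...");</script>'),
    (12, "フレームワークのベンチマーク"),
])

def fortunes_page():
    # Dynamic-size list: no foreknowledge of the table's row count.
    fortunes = [{"id": r[0], "message": r[1]}
                for r in conn.execute("SELECT id, message FROM Fortune")]
    fortunes.append({"id": 0, "message": "Additional fortune added at request time."})
    fortunes.sort(key=lambda f: f["message"])  # sorted in the app, not via ORDER BY
    rows = "".join("<tr><td>%d</td><td>%s</td></tr>"
                   % (f["id"], html.escape(f["message"]))  # untrusted text escaped
                   for f in fortunes)
    return ("<!DOCTYPE html><html><head><title>Fortunes</title></head><body>"
            "<table><tr><th>id</th><th>message</th></tr>"
            + rows + "</table></body></html>")

print(fortunes_page())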
Test type 5: Database updates
This test is a variation of Test #3 that exercises the ORM's persistence of objects and the database driver's performance at running UPDATE statements or similar.
The spirit of this test is to exercise a variable number of read-then-write style database operations.
Requirements
The recommended URI is /updates.
For every request, an integer query string parameter named queries must be retrieved from the request.
The parameter specifies the number of rows to fetch and update in preparing the HTTP response (see below).
The queries parameter must be bounded to between 1 and 500.
If the parameter is missing, is not an integer, or is an integer less than 1, the value should be interpreted as 1; if greater than 500, the value should be interpreted as 500.
The request handler must retrieve a set of World objects, equal in count to the queries parameter, from the World database table.
Each row must be selected randomly using one query in the same fashion as the single database query test (Test #2 above).
As with the read-only multiple-query test type (#3 above), use of IN clauses or similar means to consolidate multiple queries into one operation is not permitted.
At least the randomNumber field must be read from the database result set.
Each World object must have its randomNumber field updated to a new random integer between 1 and 10000.
Each World object must be persisted to the database with its new randomNumber value.
Use of batch updates is acceptable but not required.
Use of transactions is acceptable but not required.
If transactions are used, a transaction should only encapsulate a single iteration, composed of a single read and single write.
Transactions should not be used to consolidate multiple iterations into a single operation.
For raw tests (that is, tests without an ORM), each updated row must receive a unique new randomNumber value. It is not acceptable to change the randomNumber value of all rows to the same random number using an UPDATE ... WHERE id IN (...) clause.
Each World object must be added to a list or array.
The list or array must be serialized to JSON and sent as a response.
The response content type must be set to application/json.
The response headers must include either Content-Length or Transfer-Encoding.
The response headers must include Server and Date.
Use of an in-memory cache of World objects or rows by the application is not permitted.
Use of prepared statements for SQL database tests (e.g., for MySQL) is encouraged but not required.
gzip compression is not permitted.
Server support for HTTP Keep-Alive is strongly encouraged but not required.
If HTTP Keep-Alive is enabled, no maximum Keep-Alive timeout is specified by this test.
The request handler will be exercised at 256 concurrency only.
The request handler will be exercised with query counts of 1, 5, 10, 15, and 20.
The request handler will be exercised using GET requests.
Example request
GET /updates?queries=10 HTTP/1.1
Host: server
User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/ Firefox/30.0 AppleWebKit/600.00 Chrome/30.0.0000.0 Trident/10.0 Safari/600.00
Cookie: uid=; __utma=1.....12; wd=
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Connection: keep-alive
Example response
HTTP/1.1 200 OK
Content-Length: 315
Content-Type: application/json; charset=UTF-8
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT
[{"id":4174,"randomNumber":331},{"id":51,"randomNumber":6544},{"id":4462,"randomNumber":952},{"id":2221,"randomNumber":532},{"id":9276,"randomNumber":3097},{"id":3056,"randomNumber":7293},{"id":6964,"randomNumber":620},{"id":675,"randomNumber":6601},{"id":8414,"randomNumber":6569},{"id":2753,"randomNumber":4065}]
Test type 6: Plaintext
This test is an exercise of the request-routing fundamentals only, designed to demonstrate the capacity of high-performance platforms in particular.
Requests will be sent using HTTP pipelining.
The response payload is still small, meaning good performance is still necessary in order to saturate the gigabit Ethernet of the test environment.
Requirements
The recommended URI is /plaintext.
The response content type must be set to text/plain.
The response body must be Hello, World!.
This test is not intended to exercise the allocation of memory or instantiation of objects.
Therefore it is acceptable but not required to re-use a single buffer for the response text (Hello, World!).
However, the response must be fully composed from this and its headers within the scope of each request and it is not acceptable to store the entire payload of the response, headers inclusive, as a pre-rendered buffer.
The response headers must include either Content-Length or Transfer-Encoding.
The response headers must include Server and Date.
gzip compression is not permitted.
Server support for HTTP Keep-Alive is strongly encouraged but not required.
Server support for HTTP/1.1 pipelining is encouraged.
Servers that do not support pipelining may be included but should downgrade gracefully.
If you are unsure about your server's behavior with pipelining, test with the Wrk load generation tool used in our tests.
If HTTP Keep-Alive is enabled, no maximum Keep-Alive timeout is specified by this test.
The request handler will be exercised at 256, 1,024, 4,096, and 16,384 client-side concurrency.
The request handler will be exercised using GET requests.
Example request
GET /plaintext HTTP/1.1
Host: server
User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/ Firefox/30.0 AppleWebKit/600.00 Chrome/30.0.0000.0 Trident/10.0 Safari/600.00
Cookie: uid=; __utma=1.....12; wd=
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Connection: keep-alive
Example response
HTTP/1.1 200 OK
Content-Length: 15
Content-Type: text/plain; charset=UTF-8
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT
Hello, World!
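As a final hedged sketch (again with Python's wsgiref standing in for a real application server), note that only the body buffer is re-used; the headers are composed within the scope of each request, and Server and Date come from the server itself:
from wsgiref.simple_server import make_server

HELLO = b"Hello, World!"  # re-using a single body buffer is acceptable

def plaintext_app(environ, start_response):
    # Headers are generated per request; only the body, not the entire
    # response with headers, is pre-rendered.
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(HELLO))),
    ])
    return [HELLO]

if __name__ == "__main__":
    make_server("", 8080, plaintext_app).serve_forever()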
