
Implement some basic benchmarks #5

Open
zah opened this issue Jun 27, 2018 · 6 comments

@zah
Member

zah commented Jun 27, 2018

Your goal will be to create a benchmark testing asyncdispatch2 against Nim's existing asyncdispatch module and the best competing frameworks available in other programming languages. A suitable choice would be to implement one of the famous TechEmpower benchmarks:

https://www.techempower.com/benchmarks/#section=data-r15&hw=ph&test=plaintext

In particular, the plaintext case was selected because it makes use of HTTP pipelining, which pushes all the frameworks to their limits.

The expected deliverable is working code for the test case with both the new and the old asyncdispatch modules, as well as a set of helper scripts allowing us to re-run the tests later and compare the results against some of the best-performing frameworks available for other languages.
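For orientation, the handler being benchmarked is tiny. The following is a minimal sketch of the plaintext case using the standard library's asyncdispatch and asynchttpserver as the baseline; the port number is an arbitrary choice, and the asyncdispatch2 variant would use that library's own server/transport API instead.

import asyncdispatch, asynchttpserver, httpcore

var server = newAsyncHttpServer()

proc cb(req: Request) {.async.} =
  # TechEmpower plaintext: fixed body, text/plain content type.
  let headers = newHttpHeaders({"Content-Type": "text/plain"})
  await req.respond(Http200, "Hello, World!", headers)

waitFor server.serve(Port(8080), cb)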

@2vg

2vg commented Jun 28, 2018

cool :)
I am looking forward to the benchmark.

Let's decide on the rules for the benchmark.
Imitating TechEmpower's rules gives the following:

  • The recommended URI is /plaintext.
  • The response content type must be set to text/plain.
  • The response body must be "Hello, World!".
  • The response headers must include Content-Length (or Transfer-Encoding), Server, and Date.

Also, I saw the pull request for the plaintext benchmark.
If other web frameworks also participate in the benchmark, it would only be fair to include header parsing and the like. (A rough sketch of assembling a response with those headers follows the example response below.)

Example request

GET /plaintext HTTP/1.1
Host: server
User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/20130501 Firefox/30.0 AppleWebKit/600.00 Chrome/30.0.0000.0 Trident/10.0 Safari/600.00
Cookie: uid=12345678901234567890; __utma=1.1234567890.1234567890.1234567890.1234567890.12; wd=2560x1600
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Connection: keep-alive

Example response

HTTP/1.1 200 OK
Content-Length: 13
Content-Type: text/plain; charset=UTF-8
Server: Example
Date: Wed, 17 Apr 2013 12:00:00 GMT

Hello, World!
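To make the header rules above concrete, here is a rough sketch (plain Nim, independent of either event loop) of assembling that raw response; the "nim-bench" Server value and the helper name are placeholders for this example, not code from any of the pull requests.

import times

proc plaintextResponse(): string =
  # Builds the response shown above: Content-Length, Content-Type, Server, Date.
  let body = "Hello, World!"
  let date = now().utc.format("ddd, dd MMM yyyy HH:mm:ss 'GMT'")
  result = "HTTP/1.1 200 OK\r\L" &
           "Content-Length: " & $body.len & "\r\L" &
           "Content-Type: text/plain; charset=UTF-8\r\L" &
           "Server: nim-bench\r\L" &
           "Date: " & date & "\r\L" &
           "\r\L" & body

echo plaintextResponse()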

@nitely

nitely commented Jun 29, 2018

I won't be working on PR #4 anymore, so someone else can take it over. Everything in TechEmpower is open source: they use wrk as the client [0] plus a Lua script to enable pipelining. The code for running each server/framework is there as well.

If other web frameworks also participate in the benchmark, it would only be fair to include header parsing and the like.

Most of them just parse the Request-Line, and the headers are parsed lazily (i.e., only when the headers are accessed for the first time, so they are not parsed in the benchmarks). Some of them will cache the response. So there shouldn't be much of a difference from the current benchmarks.

[0] https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/toolset/wrk
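As a rough illustration of the "parse only the Request-Line, headers lazily" approach described above (not code from any of the frameworks), a parser along these lines splits just the first line and keeps the header bytes untouched until something actually asks for them:

import strutils

type
  RawRequest = object
    httpMethod, path, version: string
    rawHeaders: string  # left unparsed; a real server would parse these on first access

proc parseRequestLine(data: string): RawRequest =
  # Split only the Request-Line; header parsing is deferred.
  let lineEnd = data.find("\r\L")
  let parts = data[0 ..< lineEnd].splitWhitespace()
  result.httpMethod = parts[0]
  result.path = parts[1]
  result.version = parts[2]
  result.rawHeaders = data[lineEnd + 2 .. ^1]

let req = parseRequestLine("GET /plaintext HTTP/1.1\r\LHost: server\r\L\r\L")
assert req.path == "/plaintext"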

@2vg

2vg commented Jun 29, 2018

Oh, that is sad, but if I have time I'd also like to help with the benchmark setup.

Most of them just parse the Request-Line and the headers are parsed lazily

That's true...

@pablanopete pablanopete added this to Bounty Open in Status Bounty Tracker Jul 3, 2018
@kdeme
Contributor

kdeme commented Aug 23, 2018

I could give this a go, but I would need some updated information on the current state and what is actually wanted.

I've run the code from nitely, and on the latest revision it no longer works (see the discussion in #4); however, the code in cheatfate's gist does work.

Besides that, what is actually wanted now? These basic tests, running them with the TechEmpower framework, or something else? And should this also be set up somewhere to keep running as part of CI?

Let me know...

@jangko
Contributor

jangko commented Aug 23, 2018

Current state: I am working on this in #9; perhaps tomorrow I will push a working prototype.

@pablanopete pablanopete moved this from Bounty Open to Rebounty in Status Bounty Tracker Sep 11, 2018
@andytudhope andytudhope moved this from Rebounty to Nimbus - awaiting in Status Bounty Tracker Nov 21, 2018
@StatusSceptre
Member

In order for this to be approved as a bounty-xl, I will need the following:

  • Estimated time of work to be done
  • Required experience level: (beginner, intermediate, expert)
  • Amount to be posted: 1000–2000 DAI
    • if this range is inappropriate, we can move it to a new label
  • Accepted-work criteria

Approval from at least 2 core contributors is needed to give this out as a bounty, via a "thumbs up" on the requirements comment.

@StatusSceptre StatusSceptre added the awaiting-bounty-criteria Need more information for bounty approval label Apr 17, 2019
@rachelhamlin rachelhamlin moved this from Nimbus - awaiting to Work in progress in Status Bounty Tracker Sep 2, 2019
@rachelhamlin rachelhamlin moved this from Work in progress to bounty-awaiting-approval in Status Bounty Tracker Sep 2, 2019
cheatfate added a commit that referenced this issue Jan 21, 2021
cheatfate added a commit that referenced this issue Feb 10, 2021
cheatfate added a commit that referenced this issue Feb 10, 2021
zah pushed a commit that referenced this issue Feb 18, 2021