
Missing delay (sleep, Thread.sleep) in mruby? #1283

Closed
taosx opened this issue May 1, 2017 · 6 comments

Comments

@taosx

taosx commented May 1, 2017

I wrote an mruby handler which uses http_request to keep my WordPress cache fresh, but I don't know of any way to make the handler sleep. It fires too many requests at once, and the request fails because PHP can't keep up.

Is there any way to sleep for a few seconds in h2o using mruby? (I tried installing a custom Thread mruby extension, but I can't seem to get the response data out of the thread.)

The error:
[lib/handler/fastcgi.c] in request:/index.php/tag/science:connection failed:failed to connect to host

The code:

if request_is_from_self and links_file_exist and req_is_get
    links = `php #{links_filepath}`   # shell out to PHP for the list of links

    for link in links.split(' ') do
        req = http_request(link)
        _, _, _ = req.join            # blocks until this request completes
    end
end
@yannick
Contributor

yannick commented May 2, 2017

hi @taosx
i somewhat doubt that it's really php that can't keep up.
are you sending out http requests to the very same instance of h2o that then runs fastcgi?

not sure if you did this intentionally to emulate pauses, but from what i read you join every request sequentially. the idea would be that the requests go out in parallel, so you only start joining after all requests have been fired.

can you try something like

links.split(' ').map{|l| http_request }.to_a.map{|r| r.join}

and post a more detailed error description?

also, if it's always the same few links and they're more or less static, you could quickly implement a cheap in-memory cache.
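[Editorial note] The fire-all-then-join pattern suggested above can be sketched in plain Ruby with threads. h2o's http_request is only available inside an mruby handler, so a hypothetical fetch(link) backed by a thread stands in for it here; the timings in the comments assume the simulated 0.1 s latency.

```ruby
# Hypothetical stand-in for h2o's http_request: returns a thread that
# resolves to the "response" after simulated network latency.
def fetch(link)
  Thread.new do
    sleep 0.1                     # simulate network latency
    "response for #{link}"
  end
end

links = %w[/a /b /c]

# Sequential: each request completes before the next starts (~0.3 s total).
sequential = links.map { |l| fetch(l).value }

# Parallel: fire every request first, then collect the results (~0.1 s total).
parallel = links.map { |l| fetch(l) }.map(&:value)
```

The second form is what the suggestion above aims at: the map that calls fetch starts all requests before the second map begins joining them.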

@taosx
Author

taosx commented May 2, 2017

I believe your code is not passing an argument to http_request; I modified it like so:
links.split(' ').map{|l| http_request(l) }.map{|r| r.join}
but it still seems to perform worse than the code I posted above.
I solved the problem by installing and enabling OPcache for PHP, and now the time it takes to do all the requests dropped from 27 seconds to 3.1 seconds without errors. With your code it goes to 4 seconds.

After I do some more tests I would like to open-source the whole setup for my WordPress site with h2o, which caches pages every 2 minutes, precompresses with both gzip and Brotli at maximum settings, and serves from cache. Love h2o so far; thanks to all the contributors to h2o!!

In the future I would love to see more scripting features for h2o, like openresty but based on h2o :D

@yannick
Contributor

yannick commented May 2, 2017

@taosx ah sorry. i forgot the crucial part: .to_a

links.split(' ').map{|l| http_request }.to_a.map{|r| r.join}

just out of curiosity, can you try that again? also, how many requests do you make / how big is the links array?

@taosx
Author

taosx commented May 2, 2017

@yannick I tried again; I had to modify http_request to pass it its argument (l).
The speed dropped to 3.06 seconds :D and sometimes 2.9+ once the cache is warm.
The links array has 41 elements (AWS EC2 t2.micro).

Every time I add another post I get 1 link from the post itself + ~5 links from the tags added...
I think I'll have a scaling problem in the near future. I'll drop mruby for now and go for a different approach later.

Do you think I could use h2o with mruby as a reverse-proxy cache for WP? I believe it's possible.

@yannick
Contributor

yannick commented May 3, 2017

it seems that you shell out to php via ``links = `php #{links_filepath}` ``. that is likely very slow; did you exclude that time from your measurement?
for just h2o the t2.micro should be ok, but if you also run the php stuff on that machine, they compete for the single cpu and performance drops further. so, at least for testing, i'd take something like a c4.large or c4.xlarge.

yes, caching via mruby should be possible: either you do it in memory or you use redis (which currently still needs a patch but should be merged soon, see #1152).
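[Editorial note] The in-memory option mentioned above can be sketched as a tiny TTL cache in plain Ruby. The names here (TtlCache, fetch) are hypothetical; in an h2o mruby handler the same Hash-based idea works, with the store captured in the handler's closure.

```ruby
# Minimal in-memory cache with per-entry expiry (TTL).
class TtlCache
  def initialize(ttl_seconds)
    @ttl = ttl_seconds
    @store = {}                  # key => [value, expires_at]
  end

  # Returns the cached value if still fresh; otherwise recomputes it
  # via the given block and stores it with a new expiry time.
  def fetch(key)
    value, expires_at = @store[key]
    return value if value && Time.now < expires_at
    value = yield
    @store[key] = [value, Time.now + @ttl]
    value
  end
end

cache = TtlCache.new(120)        # 120 s, matching a 2-minute refresh cycle
page  = cache.fetch("/index.php/tag/science") { "rendered page body" }
```

On a hit the block is never run, so the expensive fastcgi round trip is skipped entirely until the entry expires.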

@kazuho
Member

kazuho commented May 8, 2017

This is an interesting discussion!

Aside from how the issue should be resolved (e.g. by implementing a cache using mruby), I believe that there is no reason why we should not provide a sleep function in our mruby handler.
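[Editorial note] For reference, the pacing the original poster wanted looks like this in plain Ruby, where Kernel#sleep exists. At the time of this thread, h2o's mruby handler had no equivalent, which is exactly what this issue requests; the http_request call is h2o-specific and is left commented out.

```ruby
# Pace cache-warming requests so the backend can keep up.
links = %w[/a /b /c]
started = Time.now
links.each do |link|
  # http_request(link).join     # h2o-specific call, omitted in this sketch
  sleep 0.05                    # pause between requests
end
elapsed = Time.now - started    # at least 0.05 s per link
```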

@kazuho kazuho added the mruby label May 8, 2017
@i110 i110 mentioned this issue Jun 16, 2017
@taosx taosx closed this as completed Apr 18, 2021