
very long generation time #23

Open
muhnashX opened this issue Oct 17, 2019 · 5 comments
Labels
feedback User comment improvement Improvement of an existing feature in progress This issue is being worked on

Comments

@muhnashX

Thanks for putting time and effort into such an amazing project. Great work!

Currently I'm using Chatette in a project (generation modifiers are awesome). One problem, though: the generation takes too much time!

I'm generating about ~35K sentences.

Statistics:

Parsed files: 29
Declared units: 80 (80 variations)
	Declared intents: 8 (8 variations)
	Declared slots: 17 (17 variations)
	Declared aliases: 55 (55 variations)
Parsed rules: 11031

Generation takes about ~1 hour, while a clone of the same templates in Chatito takes about ~10 min.

My question is: what is the complexity of Chatette? Or, which aspect of the statistics above is the generation time directly proportional to?
(I noticed that Chatette runs on a single core, so I tried splitting the master file into 4 files and used ray to generate each master file on a separate worker.
With this I managed to get the generation time down to ~25 min, but that is still quite an overhead.)

If anyone has a thought or advice, I'd be grateful!

SimGus (Owner) commented Oct 17, 2019

This is weird; I used to generate ~27k sentences in a few minutes. That being said, I haven't tested performance since v1.4.2, and I made a large refactor since then which might have decreased performance. I'll check again on v1.6.0.

Would you mind answering a few questions so that I can have a better understanding of what is going on here?

  • I assume you are using a fairly normal computer but I prefer to ask, is your computer particularly slow?
  • Can you confirm you are using Chatette v1.6.0?
  • Which part of the process is especially slow: parsing, generation, or writing the output file(s)?
  • Are your template files particularly convoluted? By that I mean, do you often have very long chains of aliases used inside other aliases, used in other aliases, and so on? Also, do you use a lot of modifiers?
  • Could you measure whether it is the memory consumption or the CPU utilization that is the limiting factor? Basically, when you run Chatette, is most of your RAM being used or is the CPU core that it is executing on running at full throttle?
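To answer the last question empirically, here is a minimal stdlib sketch for checking whether a run is CPU-bound or memory-bound; `workload()` is a stand-in for the real call, and the `resource` module is Unix-only.

```python
# Sketch: telling whether a run is CPU-bound or memory-bound using only the
# stdlib. Replace workload() with the real work, e.g. Chatette's generation.
import resource
import time

def workload():
    # Stand-in for the expensive work.
    return sum(i * i for i in range(10**6))

start_wall = time.perf_counter()
start_cpu = time.process_time()
workload()
wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu

# ru_maxrss is in KiB on Linux (bytes on macOS).
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"wall {wall:.2f}s, cpu {cpu:.2f}s, peak RSS ~{peak_kb}")
```

If CPU time is close to wall time, the process is CPU-bound; if wall time is much larger while no core is saturated, swapping or I/O is the likelier culprit.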

To answer your questions, you can see a rule as a tree, where each piece of content would be a node, and branches would be possible choices the generator can make.

For example, the template

%[intent]
    A single rule with an ~[alias] and a [choice|small choice]
    
~[alias]
    another rule with a [choice|small choice]
    a third rule

could be seen as the following tree:
[RuleTree diagram]

To generate an example, Chatette traverses such a tree. Therefore, several factors can increase generation time:

  • the depth of such a tree
  • the number of unit references/choices in the rules
  • the length of the rules
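To make the idea concrete, here is a toy sketch (not Chatette's actual code) of generation as a random walk over such a tree, built from the example template above:

```python
# Toy sketch: generation as a random walk over a rule tree. Each unit maps
# to its list of rules; a rule is a list of plain strings,
# ("ref", unit_name) references, or ("choice", [options]) inline choices.
import random

UNITS = {
    "intent": [
        ["A single rule with an", ("ref", "alias"),
         "and a", ("choice", ["choice", "small choice"])],
    ],
    "alias": [
        ["another rule with a", ("choice", ["choice", "small choice"])],
        ["a third rule"],
    ],
}

def generate(unit_name):
    rule = random.choice(UNITS[unit_name])       # pick one branch of the tree
    parts = []
    for token in rule:
        if isinstance(token, tuple):
            kind, payload = token
            if kind == "ref":
                parts.append(generate(payload))  # recurse: depth costs time
            else:
                parts.append(random.choice(payload))
        else:
            parts.append(token)
    return " ".join(parts)

print(generate("intent"))
```

Each reference triggers a recursive descent, which is why deep alias chains and long rules directly inflate generation time.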

There are a few things I would suggest you do to try and decrease the execution time, aside from the obvious suggestion (i.e. closing all resource-intensive applications when running Chatette):

  • as I said before, try to keep your rule trees as shallow as possible, limiting the number of unit references and choices you use in them
  • limit the number of generation modifiers if you don't really need them, as they add a small overhead during generation
  • if you can, split your templates into separate files and run Chatette separately on each of them, to avoid regenerating a lot of examples when you only made changes to a handful of units. What I usually do is have several master files which contain unrelated intents, and one file containing "common" aliases that is included in each master file.
  • you can maybe try using Chatette v1.5.0 and see if the performance is better. Be careful: some syntax was introduced in v1.6.0 and will not be recognized by previous versions (you can take a look at the changelog).
  • you can try using a faster Python interpreter such as PyPy
  • you can take a look at the code and try to make it more efficient
  • your way of parallelizing the execution using ray is certainly one of the best solutions. Chatette obviously has the potential to execute in a multi-threaded way; I might add this in the future.

Of course, those suggestions are just workarounds; having Chatette run for an hour for "only" 35k examples is not acceptable. Improving Chatette's performance is the real solution here. I will do it, but it will certainly take time.

On a side note, if you are using Chatette for machine learning tasks, be careful that you can easily have your models overfit with such a high number of examples.

I hope this helps and thank you for your kind message :)

@SimGus SimGus added the feedback User comment label Oct 17, 2019
muhnashX (Author) commented Oct 21, 2019

Apologies for the late reply; things came up and I couldn't access the machine on which I'm experimenting.
First of all, thank you for the detailed reply, and yes, it helped! (especially the second half)
Regarding your questions:

I assume you are using a fairly normal computer but I prefer to ask, is your computer particularly slow?

Well, it's a Thinkpad with Core i7-8565U CPU and 8 GBs of RAM.

Can you confirm you are using Chatette v1.6.0?

Yes! I'm using version 1.6.0.

Which part of the process is especially slow: parsing, generation or writing the output file(s)?

The generation, I think, because the parsing is done quickly and then I'm left at the prompt: [DBG] Generating training examples...
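One way to confirm that is the stdlib profiler; in this sketch the `parse`/`generate_all` functions are stand-ins, not Chatette's real API.

```python
# Sketch: confirming which phase dominates with the stdlib profiler. The
# parse/generate_all functions below are stand-ins for the real phases.
import cProfile
import io
import pstats

def parse():
    return list(range(10**5))

def generate_all(rules):
    return [r * 2 for r in rules]

profiler = cProfile.Profile()
profiler.enable()
generate_all(parse())
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

In practice, running `python -m cProfile -s cumulative -m chatette <template>` (the `-m` target form should work on Python 3.7+) would show whether parsing or generation dominates without touching the code.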

Are your template files particularly convoluted? By that I mean, do you often have very long chains
of aliases used inside other aliases, used in other aliases, and so on? Also, do you use a lot of modifiers?

Yes, kind of: a typical intent definition is basically built of aliases, some of them nested up to 4 levels, e.g.
[high_level_alias --> sub_alias_lvl1 --> sub_alias_lvl2 --> small_alias that contains slots]
I think some work can be done here (flattening and splitting things a bit).
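For illustration, flattening such a chain means inlining the intermediate aliases. The alias names below come from the chain above; the rule contents are made up.

```
~[high_level_alias]
    some words ~[sub_alias_lvl1]

~[sub_alias_lvl1]
    more words ~[sub_alias_lvl2]

~[sub_alias_lvl2]
    ~[small_alias]

~[small_alias]
    contains a @[slot]
```

could become a single alias:

```
~[high_level_alias]
    some words more words contains a @[slot]
```

Each level of nesting removed is one less tree level the generator has to traverse, at the cost of some modularity.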

Could you measure whether it is the memory consumption or the CPU utilization that is the limiting factor? Basically, when you run Chatette, is most of your RAM being used or is the CPU core that it is executing on running at full throttle?

I think it's the CPU; here's a screenshot from htop taken while generating today:
[htop screenshot]
This run took about ~50 min.

muhnashX (Author) commented Oct 21, 2019

What I'm going to do is refactor my templates as much as I can, but without hurting modularity and readability too much (Chatette's winning point). I would pay time in exchange for nicer syntax, as it is easier to maintain and update.
Regarding performance optimization of v1.6.0: it would be awesome if I could help, but I lack experience in that matter. That's why I went for treating it as a black box and trying to multi-process at a high level, using ray.

SimGus (Owner) commented Oct 22, 2019

Thanks for your replies :)

I think I see what the problem is: since v1.6.0, Chatette uses caching during generation to improve generation time, but for big templates the cache eats up the whole RAM, which makes everything very slow. I assumed this would only be a problem for much larger files, but apparently I was wrong.

I'll add a way to disable that (or just tone it down) in the next update. It will likely take time, as I don't have much time at the moment, so I hope you can cope with the current version.
If you can, trying the previous version (v1.5.2) could help, I think.

Thanks again for the heads up!
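For context, one common way to keep such a cache from growing without limit is to bound its size and evict old entries. A minimal stdlib sketch of the idea (this is not Chatette's implementation; `expand_unit` is a hypothetical stand-in for an expensive template-expansion step):

```python
# Sketch: bounding a generation cache so it cannot exhaust RAM.
# expand_unit is hypothetical; lru_cache evicts least-recently-used entries.
from functools import lru_cache

@lru_cache(maxsize=1024)
def expand_unit(unit_name, variation):
    return f"{unit_name}-{variation}"   # pretend this is expensive

for i in range(5000):
    expand_unit("alias", i % 2000)      # far more distinct keys than maxsize

print(expand_unit.cache_info())         # currsize never exceeds 1024
```

A bounded cache trades some repeated work for a memory ceiling, which is usually the right trade once the working set no longer fits in RAM.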

SimGus (Owner) commented Nov 11, 2019

I just released a patch (v1.6.1) which tackles the issue temporarily: the problem came from the caching strategy, which was way too aggressive. That patch disables caching when there are more than 50 units (aliases, slots and intents) declared.
I could record about a 4x decrease in execution time for medium-size templates (you can find my benchmarks here).

Of course, this is just a temporary solution, I plan to tackle the issue in a cleaner and more robust way in future releases. I will keep this issue open until I am satisfied with the caching strategy, and more generally the execution time.

@SimGus SimGus added improvement Improvement of an existing feature in progress This issue is being worked on labels Nov 12, 2019