This repository has been archived by the owner on Nov 3, 2023. It is now read-only.

multiwoz v22 is very slow #4446

Open
stephenroller opened this issue Mar 24, 2022 · 3 comments

Comments

@stephenroller
Contributor

Bug description

It takes an extremely long time to load multiwoz v22. With the data already downloaded, the train set takes >200 seconds to get to display_data on my development machine.

There are two reasons for this.

We aren't lazy when loading TOD datasets

As far as I can tell, the TOD teachers load the full dataset into memory before enumerating anything. I believe this comes from this line:

episodes = list(self.setup_episodes(self.fold))

Note that we materialize the full list of episodes, so in setup_data we lose the benefit of DialogTeacher's lazy generator:

for episode in self.generate_episodes():

Fixing this would make display_data fast, since the second issue below would no longer block it. However, the fix is complicated by the n_shot logic.
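A minimal toy sketch of the lazy approach (this is illustrative code, not ParlAI internals; the class and method names mirror the snippets above but are assumptions about the surrounding teacher class):

```python
import itertools


class LazyTeacher:
    """Toy illustration of lazy vs. eager episode loading."""

    def setup_episodes(self, fold):
        # Stand-in for the expensive per-episode construction.
        for i in itertools.count():
            yield {"fold": fold, "episode_id": i}

    def generate_episodes_eager(self, fold):
        # Current behavior: materialize everything before enumerating.
        # (Would never return here, since setup_episodes is unbounded.)
        return list(self.setup_episodes(fold))

    def generate_episodes_lazy(self, fold):
        # Proposed: pass the generator through, so a consumer like
        # display_data can stop after the first few episodes.
        yield from self.setup_episodes(fold)


teacher = LazyTeacher()
# Only the first three episodes are ever constructed.
first_three = list(itertools.islice(teacher.generate_episodes_lazy("train"), 3))
```

With the lazy version, display_data pays only for the episodes it actually shows, instead of the full dataset up front.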

We are very inefficient in looking up in the multiwoz database

multiwoz v22 has a ton of code to load the database so the inform metric can be computed. Once the database is loaded, we need to find entries corresponding to user requests.

We're spending some 92% of our time inside this method:

def _get_find_api_response(self, intent, raw_slots, sys_dialog_act):
    """
    Get an API response out of the lookup databases.
    """

In particular, when we select from the database here:

find = self.dbs[domain]
for slot, values in slots.items():
    if slot == "arriveby":
        condition = find[slot] < values[0]
    elif slot == "leaveat":
        condition = find[slot] > values[0]
    else:
        condition = find[slot].isin(values)
    find = find[condition]

The issue is that we're doing a fully linear SELECT on line 203: every row is enumerated and checked against our options, and that scan is repeated for every (slot, values) pair as we narrow the result down. 😱

To fix, we would need to build an index of (slot, value)->record_id and select from that (repeatedly reducing the set for the multiple conditions).
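A hypothetical sketch of that index, using made-up records and slot names. Equality slots become set intersections; note that range slots like arriveby/leaveat would still need a scan or a sorted index, which this sketch doesn't cover:

```python
from collections import defaultdict

# Made-up records standing in for one domain's database.
records = [
    {"id": 0, "area": "centre", "food": "italian"},
    {"id": 1, "area": "north", "food": "italian"},
    {"id": 2, "area": "centre", "food": "chinese"},
]

# Build the inverted index once: (slot, value) -> set of record ids.
index = defaultdict(set)
for rec in records:
    for slot, value in rec.items():
        if slot != "id":
            index[(slot, value)].add(rec["id"])


def lookup(slots):
    """slots: mapping of slot -> list of acceptable values."""
    result = None
    for slot, values in slots.items():
        # Union over acceptable values, then intersect across slots,
        # repeatedly reducing the candidate set.
        ids = set().union(*(index[(slot, v)] for v in values))
        result = ids if result is None else result & ids
    return result or set()


matches = lookup({"area": ["centre"], "food": ["italian"]})
```

Each lookup then touches only the candidate id sets rather than every row of the DataFrame.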

Alternatively: if we could just move all this into the build.py and cache it, then we would do it all once the first time the dataset is loaded, and have fast loading forever after.
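A sketch of that build-time caching idea, with illustrative paths and a stand-in for the expensive computation (not the actual build.py API):

```python
import os
import pickle
import tempfile

# Hypothetical cache filename written alongside the downloaded data.
CACHE_NAME = "db_index.pkl"


def expensive_precompute():
    # Stand-in for building the (slot, value) -> record ids index.
    return {("area", "centre"): {0, 2}}


def load_or_build(datapath):
    """Compute once on first load, then reload from disk forever after."""
    cache_file = os.path.join(datapath, CACHE_NAME)
    if os.path.exists(cache_file):
        with open(cache_file, "rb") as f:
            return pickle.load(f)
    result = expensive_precompute()
    with open(cache_file, "wb") as f:
        pickle.dump(result, f)
    return result


with tempfile.TemporaryDirectory() as d:
    first = load_or_build(d)   # computes and writes the cache
    second = load_or_build(d)  # hits the on-disk cache
```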

@github-actions

This issue has not had activity in 30 days. Please feel free to reopen if you have more issues. You may apply the "never-stale" tag to prevent this from happening.

@wangxieric

@stephenroller Is there any solution to address the above issue? I'm also suffering from this slowness.

@wangxieric

@mojtaba-komeili Is this issue addressed by the updated code?
