
uis-rnn can't work for long utterances dataset? #50

Open
wrongbattery opened this issue May 27, 2019 · 19 comments
Labels: question (Further information is requested)

@wrongbattery commented May 27, 2019

Describe the question

In the diarization task, I train on the AMI train-dev set and the ICSI corpus, and test on the AMI test set. Both datasets contain audios with 3-5 speakers and 50-70 minutes in length. My d-vector embedding network is trained on VoxCeleb1,2 with EER = 4.55%. I train uis-rnn with a window size of 0.24 s, 50% overlap, and a segment size of 0.4 s. The results are poor on both the training and test sets.
I have also read all of your uis-rnn code, and I don't understand: 1) why do you split up the original utterances and concatenate them by speaker, and then use that as the training input? 2) why does the input ignore which audio an utterance belongs to, merging all utterances as if they came from a single audio? This process seems completely different from the inference process, and it also reduces the ability to use larger batch sizes if one speaker talks too much.
For a 1-hour audio, the output has 20-30 speakers instead of 3-5 speakers, no matter how small crp_alpha is.
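For reference, this is roughly how I feed data into the library (a minimal sketch assuming pre-extracted segment-level d-vectors; the shapes, cluster-id strings, and observation_dim value are illustrative placeholders, while parse_arguments / fit / predict are the library's public entry points):

import numpy as np
import uisrnn

# Default model/training/inference arguments shipped with the library.
model_args, training_args, inference_args = uisrnn.parse_arguments()
model_args.observation_dim = 256      # must match the d-vector dimension (256 is an assumption)
training_args.train_iteration = 1000  # shortened for illustration

model = uisrnn.UISRNN(model_args)

# Hypothetical training data: one concatenated sequence of segment-level
# d-vectors plus one cluster-id string per segment. Prefixing the ids with an
# utterance tag (e.g. 'utt0_') is how different recordings are kept apart.
train_sequence = np.random.rand(1000, 256)              # (num_segments, observation_dim)
train_cluster_id = np.array(
    ['utt0_spk%d' % (i // 250) for i in range(1000)])   # 4 fake speakers

model.fit(train_sequence, train_cluster_id, training_args)

# Inference is per test utterance and returns one integer label per segment.
test_sequence = np.random.rand(400, 256)
predicted_labels = model.predict(test_sequence, inference_args)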

My background

Have I read the README.md file?

  • yes

Have I searched for similar questions from closed issues?

  • yes

Have I tried to find the answers in the paper Fully Supervised Speaker Diarization?

  • yes

Have I tried to find the answers in the reference Speaker Diarization with LSTM?

  • yes

Have I tried to find the answers in the reference Generalized End-to-End Loss for Speaker Verification?

  • yes
@wrongbattery added the question (Further information is requested) label on May 27, 2019
@wq2012 (Member) commented May 27, 2019

Hi,

We haven't tested uis-rnn on AMI. We found the audio quality of this dataset was not good enough, so we didn't use it. As for the poor performance on AMI, it's likely due to LSTMs/GRUs not being able to handle ultra-long sequences.

I have also read all of your uis-rnn code, and I don't understand: 1) why do you split up the original utterances and concatenate them by speaker, and then use that as the training input? 2) why does the input ignore which audio an utterance belongs to, merging all utterances as if they came from a single audio? This process seems completely different from the inference process, and it also reduces the ability to use larger batch sizes if one speaker talks too much.

@AnzCol I think you can answer these better than me.

@wrongbattery (Author) commented May 27, 2019

Hi @wq2012,
Actually, I have two more questions.

  1. If it is "the nature of LSTMs/GRUs not being able to handle ultra-long sequences", have you tried using a Transformer for the Sequence Generation part?
  2. In your paper, uis-rnn includes 3 steps: a) Speaker Change Detection, b) Speaker Assignment Process, c) Sequence Generation. Do you think steps a and b are enough for the diarization task, or for any algorithm whose basic idea is to cluster data using the data order? What is the benefit of step c, and is the loss function good enough to control what it can learn? Can you guarantee that the distribution it learns on the training set is similar to the distribution on the test set?

@wq2012 (Member) commented May 28, 2019

@wrongbattery

If "the nature of LSTM/GRU not being able to handle ultra long sequences", did you try to use The Transformer for Sequence Generation part?

I'm not familiar with that :(

In your paper, uis-rnn includes 3 steps: a) Speaker Change Detection, b) Speaker Assignment Process, c) Sequence Generation. Do you think steps a and b are enough for the diarization task, or for any algorithm whose basic idea is to cluster data using the data order? What is the benefit of step c, and is the loss function good enough to control what it can learn? Can you guarantee that the distribution it learns on the training set is similar to the distribution on the test set?

Step c) is necessary to complete the probability distribution.
UIS-RNN is our initial effort to solve the problem of clustering sequential data in a supervised way, and it is not necessarily the optimal approach. There are lots of directions for future improvements. If you feel skeptical about some setups in our experiments or codebase, you may have found a promising direction for future research :)

@wrongbattery (Author) commented May 29, 2019

@wq2012

In your paper, uis-rnn includes 3 steps: a) Speaker Change Detection (SCD), b) Speaker Assignment Process (SAP), c) Sequence Generation
Step c) is necessary to complete the probability distribution

Yes, you model P(X, Y, Z), a generative approach. Other works use a discriminative approach: P(Y|X) = P(Y|Z, X) * P(Z|X) = SAP * SCD. I think the generative approach P(X, Y, Z) is nearly optimal when you can train it on an extremely large dataset, as with Transformer-based algorithms such as BERT and GPT-2.
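For concreteness, the two factorizations I have in mind are the following (chain rule only, in my own notation with X observations, Y speaker labels, Z change indicators; this is not the paper's exact sequential decomposition):

% Generative (UIS-RNN): model the joint distribution.
P(X, Y, Z) = P(X \mid Y, Z)\, P(Y \mid Z)\, P(Z)

% Discriminative alternative: since Z is determined by Y,
% P(Y \mid X) = P(Y, Z \mid X), which factors into SAP and SCD.
P(Y \mid X) = \underbrace{P(Y \mid Z, X)}_{\mathrm{SAP}}\,\underbrace{P(Z \mid X)}_{\mathrm{SCD}}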

UIS-RNN is our initial effort to solve the problem of clustering sequential data in a supervised way.

In the unsupervised setting, I found that your spectral clustering algorithm works quite well on many audios.

@wq2012 (Member) commented May 29, 2019

Yes, you model P(X, Y, Z), a generative approach. Other works use a discriminative approach: P(Y|X) = P(Y|Z, X) * P(Z|X) = SAP * SCD. I think the generative approach P(X, Y, Z) is nearly optimal when you can train it on an extremely large dataset, as with Transformer-based algorithms such as BERT and GPT-2.

It's a good point. I think that's an interesting direction for future efforts.

In the unsupervised setting, I found that your spectral clustering algorithm works quite well on many audios.

Indeed, spectral clustering is by far the best unsupervised approach that we found. The only drawback is that it's a bit sensitive to its parameters. So we usually tune the parameters for specific domains that we want to deploy the system to.
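For context, the parameters I mean are the ones exposed by the spectralcluster package, roughly along these lines (a minimal sketch; the parameter names follow the package README of that time and may differ in newer releases, and the embeddings array is just a placeholder):

import numpy as np
from spectralcluster import SpectralClusterer

# Placeholder: (num_segments, dim) segment-level d-vectors for one audio.
embeddings = np.random.rand(500, 256)

# p_percentile and gaussian_blur_sigma are the knobs that usually need
# per-domain tuning; min/max_clusters bound the number of speakers.
clusterer = SpectralClusterer(
    min_clusters=2,
    max_clusters=7,
    p_percentile=0.95,
    gaussian_blur_sigma=1)

labels = clusterer.predict(embeddings)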

@wq2012 changed the title from "uirnn can't work for long utterances dataset?" to "uis-rnn can't work for long utterances dataset?" on May 29, 2019
@wrongbattery (Author)

@wq2012
Can you send me any logs from your uis-rnn training, since I don't know whether my training is correct or not? Have you tried running your model's predictions on the AMI dataset or the ICSI corpus?

@wq2012 (Member) commented Aug 27, 2019

@wrongbattery Sorry I didn't keep any of those logs. But I can usually see the loss function decreasing and finally converging.

We never had any success with the AMI dataset. The acoustic conditions of AMI are really different from the data we typically use to train the Voice Activity Detector, the speaker recognition model, and UIS-RNN. Specifically, the volume of the AMI dataset is really low, so the VAD has a very high false reject rate.

ICSI is a great dataset. I don't remember whether we tried to predict on it (very likely not). But we tried to train on it and predict on other datasets, and it worked pretty well.

@wrongbattery (Author)

This is my log file from training on the ICSI dataset. The loss just gets stuck around -750 to -720. I also implemented a variant based on your code that takes the number of clusters as an input, but the results on some YouTube audios are not good. What is your maximum sequence length?
screenlog.txt

@wq2012 (Member) commented Aug 28, 2019

@wrongbattery

It's weird that the loss becomes NAN at some point:

Iter: 39090  	Training Loss: -728.4636    
    Negative Log Likelihood: 108.5799	Sigma2 Prior: -837.4142	Regularization: 0.3708
Iter: 39100  	Training Loss: nan    
    Negative Log Likelihood: nan	Sigma2 Prior: nan	Regularization: nan

Not sure what is going on. I didn't try to run diarization experiments on YouTube data, since I don't have any well-annotated YouTube datasets. But I've heard other teams complaining that diarization on YouTube is super difficult. Personally, I haven't heard of any success stories on diarization with YouTube yet.

The experiments we carried out are mostly on audios <5 minutes.

@wrongbattery (Author)

I think your model converges quite fast after a few iterations. If we know the oracle number of speakers beforehand, does spectral clustering perform far better than uis-rnn? Do you agree with this?
"For long conversations, uis-rnn explodes the number of predicted speakers."

@wq2012 (Member) commented Aug 29, 2019

If we know the oracle number of speakers beforehand, does spectral clustering perform far better than uis-rnn?

I don't know. We currently don't have a good implementation in uis-rnn to limit the number of speakers. We haven't tried much in this direction.

Also, if you know the number of speakers beforehand, it is no longer the STANDARD speaker diarization problem. Comparing uis-rnn and spectral clustering in this case might not be very fair.

Besides, the performance of uis-rnn significantly depends on the quality of training data.

Do you agree with this?
"For long conversations, uis-rnn explodes the number of predicted speakers."

It could be true, but I wouldn't be too assertive about it. Our current uis-rnn implementation is more of a prototype than a product; it's not mature. There is still a lot of room to improve, and for other researchers to contribute.

@wrongbattery (Author)

Thanks. Your idea of one network per speaker is quite interesting. I'm trying to solve the online diarization problem in a production environment.

@BarCodeReader

Describe the question

In the diarization task, I train on the AMI train-dev set and the ICSI corpus, and test on the AMI test set. Both datasets contain audios with 3-5 speakers and 50-70 minutes in length. My d-vector embedding network is trained on VoxCeleb1,2 with EER = 4.55%.

Hello,
May I know how many epochs you used to reach this EER of 4.55%? I used 500 but it's stuck at an EER of 18%.
Thanks a lot.

@wrongbattery (Author)

Dear @BarCodeReader,
Actually, I use SGD with momentum as my optimizer, and I manually decrease the learning rate when the loss doesn't change much. My training inputs are 250 ms utterances, meaning I randomly sample them from the long utterances. The datasets are VoxCeleb1,2, VCTK, LibriSpeech, TIMIT, and some diarization datasets.
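Concretely, the optimizer handling is nothing special, roughly like this (a PyTorch sketch; the encoder architecture and the numbers are placeholders rather than my exact configuration):

import torch

# Placeholder speaker encoder; only the optimizer handling matters here.
encoder = torch.nn.LSTM(input_size=40, hidden_size=768, num_layers=3, batch_first=True)
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.01, momentum=0.9)

def decay_learning_rate(optimizer, factor=0.5):
    # Called manually whenever the training loss stops improving.
    for group in optimizer.param_groups:
        group['lr'] *= factor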

@BarCodeReader commented Sep 18, 2019

Hi @wrongbattery,
Thanks for the info.
250 ms... OK, then I need to reduce my input size. I think your input size is different from the paper, but I will give it a try.
Other than this, did you set everything the same as in the paper, like 64 speakers and 10 utterances each?
Thanks a lot for your reply.

@innarid commented Nov 29, 2019

@wrongbattery ICSI is a great dataset. I don't remember whether we tried to predict on it (very likely not). But we tried to train on it and predict on other datasets, and it worked pretty well.

Hi! I've divided the interviews from ICSI into approximately 5-minute wavs and tried to use d-vectors from https://github.com/CorentinJ/Real-Time-Voice-Cloning for training uis-rnn. But I have the same problem: the loss becomes NaN at some point. Can you tell me what kind of d-vectors you used?

@wq2012 (Member) commented Dec 2, 2019

Hi! I've divided the interviews from ICSI into approximately 5-minute wavs and tried to use d-vectors from https://github.com/CorentinJ/Real-Time-Voice-Cloning for training uis-rnn. But I have the same problem: the loss becomes NaN at some point. Can you tell me what kind of d-vectors you used?

@innarid
Please add some print/log commands in your code and share some logs with us, otherwise it's impossible for us to debug this. Specifically:

  • Which line produced the NaN?
  • Which operation produced the NaN?
  • What's the input of that operation?
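Something along these lines would already help (a minimal sketch assuming a PyTorch training loop; the function name and where you call it are up to you):

import torch

# Ask autograd to report which backward op produced a NaN (slow; debug only).
torch.autograd.set_detect_anomaly(True)

def check_finite(name, tensor):
    # Call this on inputs, intermediate loss terms, and the final loss to find
    # where the first non-finite value appears and what fed into it.
    if not torch.isfinite(tensor).all():
        raise RuntimeError('non-finite values found in ' + name)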

@wrongbattery (Author)

@innarid
I use this one for the d-vectors, but I customized it:
https://github.com/HarryVolek/PyTorch_Speaker_Verification

@taquynhnga2001 commented Jun 29, 2021

@wq2012
I also ran into the NaN loss issue, but starting from the initial iterations. These are some of the first lines:

Iter: 0  	Training Loss: nan    
    Negative Log Likelihood: 112.0798	Sigma2 Prior: nan	Regularization: 0.0006
Iter: 10  	Training Loss: nan    
    Negative Log Likelihood: nan	Sigma2 Prior: nan	Regularization: nan
Iter: 20  	Training Loss: nan    
    Negative Log Likelihood: nan	Sigma2 Prior: nan	Regularization: nan
Iter: 30  	Training Loss: nan    
    Negative Log Likelihood: nan	Sigma2 Prior: nan	Regularization: nan
Iter: 40  	Training Loss: nan    
    Negative Log Likelihood: nan	Sigma2 Prior: nan	Regularization: nan
Iter: 50  	Training Loss: nan    
    Negative Log Likelihood: nan	Sigma2 Prior: nan	Regularization: nan
  • I trained the model on a customised dataset to create train_sequences containing 50 sequences; each one is a sequence of d-vector embeddings extracted from a 3-minute audio recording, i.e. 150 minutes altogether.
  • I trained with --train_iteration=1000 and -l=0.001.

Even when I train on only the first 5 sequences of train_sequences, the loss becomes NaN from the first row:

Iter: 0  	Training Loss: nan    
    Negative Log Likelihood: 20.7001	Sigma2 Prior: nan	Regularization: 0.0006
Iter: 10  	Training Loss: nan    
    Negative Log Likelihood: nan	Sigma2 Prior: nan	Regularization: nan
Iter: 20  	Training Loss: nan    
    Negative Log Likelihood: nan	Sigma2 Prior: nan	Regularization: nan
Iter: 30  	Training Loss: nan    
    Negative Log Likelihood: nan	Sigma2 Prior: nan	Regularization: nan

Do you know how to solve the problem?

Update: Issue solved!
My d-vector embeddings contained some zero elements, which might be the reason for the NaN training loss. I added an insignificant bias of 1e-6 to all elements, and it's OK now.
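In case it helps anyone else, the workaround was essentially this (a sketch with my own function name; an alternative is to drop the all-zero rows entirely):

import numpy as np

def debias_embeddings(embeddings, eps=1e-6):
    # A tiny constant keeps exact-zero elements (or all-zero d-vectors) from
    # collapsing the variance/likelihood terms during training into NaN.
    return np.asarray(embeddings, dtype=float) + eps

# To inspect the degenerate rows first:
# zero_rows = np.where(~embeddings.any(axis=1))[0]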
