
Feature/8mi trainer #834

Open
wants to merge 1 commit into master
Commits on Aug 15, 2021

  1. explore more modular refactoring

    WIP
    
    WIP: remap tagger and classifier onto Trainer
    
    oops fix issue with span F1 aggregation in Trainer
    
    adds support for other loss functions like KLDiv
    
    this is useful for cases like distillation, where we
    can have soft targets (a KL distillation sketch follows
    the commit details below)
    
    pass kwargs into target
    
    use forward function
    
    option whether to rm wrapper
    
    support overriding the train target (a generic sketch of
    this pattern appears after the commit details below)
    
    This should fix multiworker mismatch on reload
    
    feelgood types
    
    fix first batch accum
    
    allow no early stopping
    
    global_step fix, clean examples, factor up
    
    more cleanup
    
    fix includes in addon
    
    rm dist code outside 8mi trainer, WIP dataset
    
    use native loaders via mead
    
    pseudo fix for showing examples
    
    fix default and backend arg in paired reader
    
    bye six + tmp working non-native LM loader
    
    add backend option
    
    LM is TF native
    
    fix test
    
    remove and simplify tf trainers and fix trim issue
    
    be a little tricky with TF native
    
    we can't switch it on with TF 1.x
    
    dpressel committed Aug 15, 2021
    Commit 0432595
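
Since one commit adds KLDiv-style losses for distillation with soft targets, here is a minimal sketch of what such a loss looks like, assuming PyTorch. This is not the PR's actual Trainer code; `distill_loss` and `temperature` are illustrative names only.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor,
                 temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student.

    Hypothetical helper, not from the PR. torch.nn.functional.kl_div
    expects the input in log-space and the target as probabilities.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # batchmean is the mathematically correct KL reduction; scaling by T^2
    # keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * temperature ** 2
```

The teacher's softened distribution is the "soft target" the commit message refers to; unlike hard one-hot labels, it can be fed straight into a KL term.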
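Several commits ("pass kwargs into target", "use forward function", "support overriding the train target") point at a Trainer whose per-step computation is an overridable hook. The following is a generic sketch of that pattern, assuming PyTorch; the class and method names are hypothetical and not the mead/baseline API.

```python
import torch

class Trainer:
    """Sketch of a trainer with an overridable train target (illustrative)."""

    def __init__(self, model: torch.nn.Module, loss_fn, optimizer):
        self.model = model
        self.loss_fn = loss_fn
        self.optimizer = optimizer

    def train_target(self, batch: dict, **kwargs) -> torch.Tensor:
        """Default target: run the model's forward and apply the loss.

        Subclasses (e.g. a tagger or classifier) override this to change
        what is computed per step; extra kwargs flow through to the model.
        """
        logits = self.model(batch["x"], **kwargs)  # use forward function
        return self.loss_fn(logits, batch["y"])

    def train_step(self, batch: dict, **kwargs) -> float:
        self.optimizer.zero_grad()
        loss = self.train_target(batch, **kwargs)  # overridable hook
        loss.backward()
        self.optimizer.step()
        return loss.item()
```

Routing every step through one hook is what lets the tagger and classifier be remapped onto a single Trainer, as the first commits in the list describe.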