Tech Meeting Notes 2020 05 14

Notes from SecureDrop Tech Meeting, May 14, 2020

1.3.0 Release retrospective

What went well?

  • this was a pretty large release where overall very few issues were found in QA (+1)
  • Caught a frustrating bug before release was final, due to manual QA
  • translations to 100% and merged on time
  • translation improvements (e.g. Weblate upgrades) discussed for the next sprint
  • critical bugs caught before they hit our users :)
  • Great to see new docs strategy live, will help to avoid a lot of problems where admins refer to out of date docs (including sometimes old version numbers in instructions) +1
  • Team is comfortable making difficult calls in the 11th hour—deferring release one day was the right decision
    • Next time around, consider whether a 48-hour buffer is more appropriate

What can be improved?

  • timezone coverage for release operations, e.g. the 3pm evaluation and handover

  • It'd sure be nice to have automated tests of the Tails logic to catch updater bugs earlier (+1). Integration tests including the new Ansible would have been needed to catch the bug in question.

    • Are there simple things we can do next time around to test the updater we have earlier in the QA cycle? Yes: we can test the update with local tags by disabling signature checks (see the local-tag sketch after this list).
  • FOLLOWUP: Add local tag steps to the standard QA plan (this would enable us to catch the kind of bug we caught when putting up the first tag).

  • FOLLOWUP: We're seeing a lot of corruption of the git repo itself in Tails; is this a new issue in Tails 4.6? At least scan through the upstream bug tracker.

  • Using git for software distribution on Tails is inviting problems. Should we package instead? (Would we have fewer problems if we packaged?) Or would that increase the maintenance burden? (My personal vote is for packaging.)

  • FOLLOWUP: Conversation we need to have: admin story in Qubes or Tails?

  • automated server/acceptance testing (an idea that was kicked around) could also be used as a "pre-flight checker" to make sure servers are in a happy place (for example, to address the recently filed iptables issue); this would save time in QA and benefit SD. Either Molecule, or we may want to look at the tool we used for Fedora cloud automated testing; anything easy will do (see the pre-flight sketch after this list).

  • FOLLOWUP: for the next release test plan, consider per-hardware duplication of acceptance testing

  • package building (including builder updates, reproducibility) is error-prone at release time (see https://github.com/freedomofpress/securedrop/issues/4533)

    • +1, suggest running apt update automatically inside the builder and then tagging the image
    • +1 to reproducible builds: if we could trust CI to build, and have a maintainer verify checksums from a local build (see the checksum sketch after this list), there'd be less work ferrying packages around. +1, we will also have to enable our pip mirror for the securedrop Debian package for the same reason.
  • FOLLOWUP: track discussion in https://github.com/freedomofpress/securedrop/issues/4533

  • it feels like there's a lot of pressure to get things done quickly (because there's so much to do in a release), which goes against a healthy learning/work environment (+1)

    • FOLLOWUP: unified comms channel in Slack per release. Perhaps do this in Wire instead, but regardless we should set it up earlier in the cycle for QA- and release-related questions.
    • FOLLOWUP: Release checkboxes in ticket: establishing primary/secondary owners for release tasks.
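
A minimal sketch of the local-tag updater test mentioned above. This is illustrative only, not the actual securedrop-admin updater code; the repo path and tag name are made-up examples. It shows the git steps, with signature verification optionally skipped so an unsigned local QA tag can be exercised:

```python
# Hypothetical illustration of the "test the updater with a local tag" idea.
# This is NOT the securedrop-admin updater; it only sketches the git steps.
import subprocess

def checkout_tag(repo_dir, tag, verify_signature=True):
    """Check out `tag` in `repo_dir`, optionally verifying its GPG signature."""
    def git(*args):
        subprocess.run(["git", *args], cwd=repo_dir, check=True)

    git("fetch", "--tags")
    if verify_signature:
        # Production behaviour: refuse to proceed unless the tag is signed
        # by a trusted key (raises CalledProcessError otherwise).
        git("tag", "-v", tag)
    # For QA with a locally created, unsigned tag, pass
    # verify_signature=False so the checkout still proceeds.
    git("checkout", tag)

# Example QA flow on the admin workstation (path and tag name are examples):
#   git tag 1.3.0-localqa        # create a throwaway local tag
#   checkout_tag("/home/amnesia/Persistent/securedrop", "1.3.0-localqa",
#                verify_signature=False)
```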
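
A sketch of the "pre-flight checker" idea, assuming we would shell out over existing ssh aliases and run a few read-only commands. The host aliases ("app", "mon") and the specific checks are placeholders, not the real SecureDrop configuration; Molecule or the Fedora cloud tooling could replace all of this:

```python
# Hypothetical pre-flight checker: run a few read-only commands on each
# server over ssh and report pass/fail before starting QA.
import subprocess

CHECKS = {
    "iptables DROP rules present": "sudo iptables -S | grep -q -- '-j DROP'",
    "ssh reachable / sudo works": "sudo true",
    "apt-daily timer active": "systemctl is-active apt-daily.timer",
}

def preflight(host):
    ok = True
    for name, cmd in CHECKS.items():
        result = subprocess.run(["ssh", host, cmd], capture_output=True)
        passed = result.returncode == 0
        ok = ok and passed
        print(f"[{host}] {name}: {'OK' if passed else 'FAIL'}")
    return ok

if __name__ == "__main__":
    # "app" and "mon" assume ssh aliases already configured on the admin workstation.
    results = [preflight(host) for host in ("app", "mon")]
    raise SystemExit(0 if all(results) else 1)
```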
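
A sketch of the checksum step from the reproducible-builds discussion: a maintainer compares SHA-256 checksums of CI-built packages against a local build. The directory layout and `.deb` glob are assumptions, not an agreed workflow:

```python
# Hypothetical reproducible-build verification: compare SHA-256 checksums of
# packages built in CI against a maintainer's local build.
import hashlib
from pathlib import Path

def sha256(path):
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def compare_builds(ci_dir, local_dir):
    ci, local = Path(ci_dir), Path(local_dir)
    identical = True
    for ci_pkg in sorted(ci.glob("*.deb")):
        local_pkg = local / ci_pkg.name
        if not local_pkg.exists():
            print(f"MISSING locally: {ci_pkg.name}")
            identical = False
        elif sha256(ci_pkg) != sha256(local_pkg):
            print(f"CHECKSUM MISMATCH: {ci_pkg.name}")
            identical = False
        else:
            print(f"OK: {ci_pkg.name}")
    return identical

# Example (paths are placeholders): compare_builds("ci-artifacts/", "build/")
```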

What have we learned?

  • We have to do more communication with the actual translators (plus the l10nlabs team). They may not always be able to communicate the trouble they are having while working on the project.

    • Suggestions: be present during the Localization Lab meetings, and maintain a presence in the IFF Mattermost.
  • FOLLOWUP: When updating the forum, note that we'll be in the IFF Mattermost.

  • Team operates effectively even under stress. Lots of support and coverage for one another

  • we should document what is expected when creating a docs PR, i.e. we should note in the docs that there should be no diff between the release branch and the docs PR (+1), and what's required for review (see the sketch after this list).

    • Question: Can we run all CI tests without the rebase portion which failed? (I know we don't care too much about tests passing in master given that it's docs-only.)
  • despite starting early on release day, the release process was quite lengthy (in wall-clock terms)
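
A small sketch of the "no diff between the release branch and the docs PR" expectation noted above; the branch names are examples only, not an agreed convention:

```python
# Hypothetical check for the docs-PR expectation: the docs PR branch should
# introduce no diff relative to the release branch.
import subprocess

def docs_pr_matches_release(release_branch, docs_branch):
    # `git diff --quiet` exits 0 when there is no diff between the two refs.
    result = subprocess.run(
        ["git", "diff", "--quiet", release_branch, docs_branch],
        check=False,
    )
    return result.returncode == 0

# Example (branch names are placeholders):
#   docs_pr_matches_release("release/1.3.0", "docs-update-1.3.0")
```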

What is still a puzzle?

  • I think we learned that these merges into master might be more annoying for folks than expected (+1, +2, +3)
    • most of the frustration, to my eye, came from having no pre-existing standard procedure to point to
  • strategy of putting up the tag prior to build leads to a potential scenario where users check out the old tag and then we pull it down if/when we find an issue (this has happened a few times now)