
Meeting 2014 06 17


Agenda

  • Nag about making a quick weekly status update (larsberg)
    • Daily meeting / scrum (?) (abinader)
  • Discussion of the new PR validation and approval ("bors") process (larsberg)
  • Discuss possible new workflows for wpt (Manish, if bandwidth permits)
  • Embedding

Status updates

  • larsberg: First, a quick nag about weekly status updates. It's less for me as a manager, and more something I can point to when people ask who's working on what. It just helps keep a handle on things now that we're no longer just three people.
  • abinader: Related to the weekly status update: maybe have some kind of quick daily meeting? On other projects, people would write a single phrase each day to keep things current.
  • larsberg: I'm against putting more meetings on the schedule, not just to keep schedules free but because of timezones and overhead.
  • mbrubeck: What info do people need to do their jobs? I find the weekly status tool gives enough info, and you can also update it more often. A previous team of mine used http://teamstat.us/, which lets you write a "status:" message in an IRC channel that shows up on a page. Even there, usage fell off once people realized nobody read it.
  • larsberg: abinader, is there anything specific we were missing?
  • abinader: Not really, just a broader idea.

Travis CI

  • larsberg: It's about ready to land; it's currently blocked on builders having issues and being offline. The biggest change is that Travis will automatically run on any change to master and on every new or updated pull request. Right now with bors, we put r+ on a commit, it gets run on the auto branch, and then it's merged (a sketch of that gating step follows this list). Travis doesn't rebase and re-test PRs when master changes. Do we want to update bors to do this, or just manually refresh the PR and then merge?
  • jdm: Any concerns about races between several PRs? bors handles that right now.
  • larsberg: Yes, there's a chance that two people could merge PRs at roughly the same time that had never been tested against each other, breaking things when combined.
  • manish: We could keep merging to the auto branch and testing there, but run the tests on Travis instead of buildbot. We can ask Travis to test PRs, the auto branch, and the master branch.
  • larsberg: Based on that concern, I'll check whether the bors changes are easy to make. I'm also meeting with the founders of Travis tonight to make sure we have resources and an escalation path.
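
To make the gating step under discussion concrete, here is a hedged sketch of roughly what bors does, not its actual code: rebuild an auto branch from the latest master, merge one approved PR, test the merged tree, and only then advance master. The branch names, PR names, and the `./mach test` command are assumptions for illustration.

```rust
use std::process::Command;

// Run a git command, returning whether it exited successfully.
fn git(args: &[&str]) -> bool {
    Command::new("git")
        .args(args)
        .status()
        .map(|s| s.success())
        .unwrap_or(false)
}

// Test one approved PR merged against the *current* tip of master, and only
// advance master if the combined tree is green.
fn land(pr_branch: &str) -> bool {
    // Rebuild the throwaway `auto` branch from the latest master.
    if !git(&["checkout", "-B", "auto", "origin/master"]) {
        return false;
    }
    // Merge the PR into `auto`; this is the tree that actually gets tested.
    if !git(&["merge", "--no-ff", pr_branch]) {
        return false;
    }
    // Run the full suite on the merged result (`./mach test` is a placeholder
    // for whatever command the builders actually run).
    let green = Command::new("./mach")
        .arg("test")
        .status()
        .map(|s| s.success())
        .unwrap_or(false);
    // Fast-forward master only on success.
    green && git(&["push", "origin", "auto:master"])
}

fn main() {
    // Landing PRs one at a time is what prevents the race jdm describes:
    // each PR is tested against whatever the previous one left on master.
    // These branch names are hypothetical.
    for pr in ["pr/feature-a", "pr/feature-b"] {
        land(pr);
    }
}
```

Per-PR Travis runs skip the "rebuild from the latest master" step, which is exactly where two individually green PRs can combine into a broken master.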

Embedding

  • zmike: As people may have seen, I've successfully landed a large amount of the embedding work into master, which means people can go out and misrender Chromium apps using Servo. That's the first step. I put up a bunch of issue tickets in case anybody wants to jump in. Right now it's not very functional, which is expected, because it's probably a multi-year project (the Chromium Embedded Framework API is huge). I'm unable to do a lot of work right now due to RSI, but larsberg and I have a talk on Servo and embedding at LinuxCon NA, so hopefully it'll be in a better state by then (mid-August).
  • pcwalton: Awesome! Thanks for the work!
  • zmike: Thanks to the team, and to lars and jdm especially for the tcmalloc stuff.
  • What's next?
  • zmike: Still working on getting the two test apps (CEF simple and CEF client) running. Simple needs a lot more work, but client will be easy after it, and simple should then make a very usable but basic Servo shell. There's still a lot of work to do; I have issue tickets on GitHub, all prefixed with "embedding", listing the symbols that need to be implemented. I added a README to the embedding crate for running and testing, though I have no idea whether it works on OS X yet. Implementation-wise, it's mainly grunt work: looking at the CEF internals for string handling, parsing, etc. (a sketch of one such symbol follows this list). The more complex stuff will need some Servo rejiggering, but not much is required to get the test apps running.
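
To make the "grunt work" concrete, here is a hedged sketch of what implementing one such embedding symbol can look like: a C-ABI string setter in the style of CEF's cef_string_utf8_set. The struct layout follows CEF's documented cef_string_utf8_t, but the field names, the omitted `copy` flag (this sketch always copies), and the error handling are simplifications, not Servo's actual embedding code.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Layout modeled on CEF's documented cef_string_utf8_t: an owned buffer plus
// the destructor that knows how to free it. (CEF's field is named `str`,
// which is a Rust keyword, hence `str_` here.)
#[allow(non_camel_case_types)]
#[repr(C)]
pub struct cef_string_utf8_t {
    pub str_: *mut c_char,
    pub length: usize,
    pub dtor: Option<unsafe extern "C" fn(*mut c_char)>,
}

// Destructor stored in the struct so the caller can free a buffer that was
// allocated on the Rust side.
unsafe extern "C" fn utf8_dtor(ptr: *mut c_char) {
    if !ptr.is_null() {
        drop(CString::from_raw(ptr));
    }
}

// C-ABI entry point in the style of cef_string_utf8_set: copy `src` into
// `output`, releasing whatever buffer the struct previously owned.
#[no_mangle]
pub unsafe extern "C" fn cef_string_utf8_set(
    src: *const c_char,
    src_len: usize,
    output: *mut cef_string_utf8_t,
) -> i32 {
    let out = &mut *output;
    // Release the previous contents, if the struct owned any.
    if let Some(dtor) = out.dtor.take() {
        dtor(out.str_);
        out.str_ = std::ptr::null_mut();
        out.length = 0;
    }
    if src.is_null() {
        return 1; // nothing to copy; leave the struct empty
    }
    let bytes = std::slice::from_raw_parts(src as *const u8, src_len);
    match CString::new(bytes.to_vec()) {
        Ok(s) => {
            out.str_ = s.into_raw();
            out.length = src_len;
            out.dtor = Some(utf8_dtor);
            1 // CEF's int-as-bool success convention
        }
        Err(_) => 0, // embedded NUL byte: a C string cannot represent it
    }
}
```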

WPT

  • manish: Wanted to discuss the workflow. When should we update the manifest? And are we going to upload the rendered tests somewhere? How will we use these tests, since most of them currently fail? Any ideas?
  • jdm: We can already specify whether they are expected to time out or fail. Once we can run them automatically, they're as good as any other automated test. As we fix them, we can update the manifests. I haven't experimented, so ms2ger can comment there.
  • manish: I think we will have to change some of the internals so they don't fail on timeouts. Even when a timeout is the expected failure, the run doesn't seem to exit with the right return code. But we should be able to fix that (see the sketch after this list).
  • jdm: I will defer to ms2ger here.
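
The return-code issue is easiest to state as code. Below is a minimal hedged sketch of the comparison the harness needs to make, not the wpt runner's actual types: a run should only fail the build when a result differs from its manifest expectation, so an expected timeout maps to exit code 0.

```rust
// Possible per-test outcomes.
#[derive(PartialEq, Clone, Copy)]
enum Outcome {
    Pass,
    Fail,
    Timeout,
}

struct TestResult {
    actual: Outcome,
    expected: Outcome, // what the manifest says this test should do
}

// The harness should only report failure for *unexpected* outcomes, so an
// expected timeout counts as success.
fn exit_code(results: &[TestResult]) -> i32 {
    let unexpected = results
        .iter()
        .filter(|r| r.actual != r.expected)
        .count();
    if unexpected == 0 { 0 } else { 1 }
}

fn main() {
    let results = [
        TestResult { actual: Outcome::Pass, expected: Outcome::Pass },
        // A known-bad test that fails exactly as the manifest predicts.
        TestResult { actual: Outcome::Fail, expected: Outcome::Fail },
        // A test that times out as expected: this must not flip the code.
        TestResult { actual: Outcome::Timeout, expected: Outcome::Timeout },
    ];
    // All three match their expectations, so this exits 0.
    std::process::exit(exit_code(&results));
}
```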