17:00:52 <nickm> #startmeeting network-team meeting, 18 Mar 2019
17:00:52 <MeetBot> Meeting started Mon Mar 18 17:00:52 2019 UTC.  The chair is nickm. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:52 <MeetBot> Useful Commands: #action #agreed #help #info #idea #link #topic.
17:00:54 <ahf> :-)
17:00:55 <ahf> o/
17:01:04 <nickm> ahf: thanks!
17:01:15 <nickm> ahf: it is not an easy monday for me, brainwise.  How is everybody else doing?
17:01:59 <nickm> per usual, the pad is at https://pad.riseup.net/p/tor-netteam-2019.1-keep
17:02:00 <ahf> :-) i hope your brain gets better as the day and week move forwards :-)
17:02:06 <nickm> thanks. me too :)
17:02:35 <ahf> it's this week + next week we are without gaba, right?
17:02:46 <nickm> yup!
17:02:57 <ahf> cool cool
17:03:23 <juga> o/
17:03:29 <nickm> juga: hi!
17:03:32 <nickm> catalyst, asn, mikeperry: ping :)
17:03:39 * catalyst is here
17:03:51 <nickm> hi catalyst !
17:04:48 <nickm> it's time to look at the roadmap kanban!
17:05:02 <nickm> per usual, please filter on your name and make sure you're actually working on the stuff that is marked as "in progress" for you
17:06:01 <nickm> catalyst: where do you think we are with #28226? I just put it back into needs_review
17:06:19 <mikeperry> nickm: yo
17:06:24 <nickm> hi mikeperry !
17:07:04 <catalyst> nickm: POC bootstrap hookup to pubsub has working unit tests, so i think we're nearly done
17:07:40 <nickm> catalyst: also, wrt roadmap, we're on #29210 together.  ISTR you had some ideas about that.  want to chat some time this week about next steps?  It was scheduled for march...
17:07:45 <nickm> catalyst: cool!
17:07:59 <nickm> catalyst: I'm eager to do some coding on it, but I don't want to jump in your way
17:08:34 <catalyst> nickm: sure! we can talk later this week
17:08:48 <nickm> great!
17:09:12 <nickm> anything else on the roadmap? anybody need help from anyone else on something there?
17:09:44 <nickm> if not, our next standard step is review assignments
17:10:05 <nickm> I'm hoping to get another 0.4.0 alpha out around the end of the week -- it's been about a month since the last one
17:10:14 <nickm> so anything in 0.4.0 that we can review and merge will rock
17:10:40 <ahf> cool
17:10:54 <nickm> The TB people asked for #29357, so if we can get that one reviewed, it would rock
17:11:04 * ahf hopes there won't be any windows issues in this one :-S
17:11:12 <nickm> asn: Hm, did review assignments not happen for this week?
17:11:38 <nickm> asn: please reach out if you'd like somebody else to help you with that while dgoulet is out
17:11:43 <juga> nickm: i asked asn to delay it a bit, and i'm afraid i was too late in telling asn that it was already fine
17:12:02 <nickm> hm, ok
17:12:16 <nickm> Everybody please remember to check review assignments once asn commits them to trac?
17:12:25 <nickm> especially whoever gets anything in 0.4.0.x-final
17:13:15 <nickm> rotations this week are: ahf on bug triage, catalyst on CI
17:13:22 <ahf> ack
17:14:05 <nickm> on announcements, we seem to have mostly old stuff, but it's worthwhile to remember that SBWS things need prompt review too
17:14:12 <nickm> I think the discussion thing is also from last week
17:15:01 <nickm> going down the list ... teor could use help with privcount stuff, and is blocked on intermittent CI failures
17:15:21 <nickm> If we can do anything about #29693 or #29500 or #29437, that would be good
17:16:42 <nickm> asn, mikeperry: do the circuitpadding / stochastic ones seem like something you could do a fix for, or talk to teor about what they need?
17:19:34 <nickm> I have a request too -- could everyone please have a look at 040-must, and see whether there is anything there that a) you could fix, or b) you could explain why we can afford to leave it unfixed?  We're planning to release 0.4.0 stable in mid-April, and I'd like to be less late this time than last time
17:19:40 <mikeperry> the #29500 one is annoying. I think we need another flag or something. I also only barely understand how this could ever happen in a real scenario
17:19:43 * ahf looks
17:19:52 <mikeperry> it seems almost impossible
17:20:24 <nickm> mikeperry: maybe add the appropriate assertions that should never be hit, so that if it does happen somehow, we can figure out why?
17:20:34 <nickm> diagnosis is the next best thing to a fix
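(A minimal sketch of the kind of diagnostic, non-fatal assertion nickm is suggesting here, using Tor's existing BUG() and log_warn()/LD_BUG helpers. The function, struct, and field names are hypothetical placeholders rather than the actual #29500 code, and the include paths assume the current src/lib layout.)

    #include "lib/log/log.h"        /* log_warn(), LD_BUG */
    #include "lib/log/util_bug.h"   /* BUG() */

    /* Hypothetical state struct, for illustration only. */
    typedef struct example_state_t {
      unsigned expected_count;
    } example_state_t;

    /* Hypothetical handler: instrument a "can't happen" condition so that,
     * if it ever does happen in the wild, we get a warning and a stack trace
     * instead of a crash. */
    static void
    handle_timer_event(example_state_t *state)
    {
      /* BUG(x) logs a "this can't happen" warning (with a stack trace) when x
       * is true, and evaluates to x, so we can bail out gracefully. */
      if (BUG(state == NULL))
        return;

      if (BUG(state->expected_count == 0)) {
        /* Diagnosis is the next best thing to a fix: record what we saw. */
        log_warn(LD_BUG, "Timer fired with expected_count == 0; "
                 "please include this log line in any bug report.");
        return;
      }

      /* ... normal handling continues here ... */
    }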
17:20:43 <nickm> mikeperry: also asn left a question for you in his "help with" section
17:21:10 * ahf takes #29136
17:21:16 <nickm> ahf: thanks!
17:21:17 <ahf> ... from dgoulet
17:22:35 <nickm> (also we seem to be out of topics on the pad.  Is there anything else for this week's meeting?)
17:23:12 <catalyst> re stochastic failures?
17:23:27 <nickm> catalyst: what about them?
17:23:29 <catalyst> did we ever decide how low we should push the expected failure rate?
17:24:14 <nickm> It's a hard question; I think for CI at least we should set them to something very low.
17:24:50 <nickm> There are power/performance/false-positive tradeoffs, and I don't really know where we should set them...
17:25:00 <nickm> but I think for CI our false positive rate needs to be _much_ lower
17:25:15 <catalyst> hm, should CI run less stringent tests than `make check`? so we might expect developers and CI to run different sets of checks by default?
17:25:26 <ahf> could we have a "soft failure" mode for these tests for release tarballs?
17:25:50 <ahf> i'm thinking it's OK if these sometimes fail for us in CI when people are working on them, but it's not good if they fail for some of the gentoo users who are trying to 'emerge tor'
17:25:57 <nickm> yeah
17:25:58 <ahf> or maybe other source distros that run tests
17:26:13 <catalyst> ahf: i think they're not OK to have in CI when they might show up in contributors' pull requests
17:26:45 <ahf> catalyst: right, i think it's bad they sometimes happen, but i think that bugs that people are aware of and are trying to solve are OK to see in CI since they are real issues
17:26:49 <nickm> I think one-in-a-million across all tests is a reasonable target for false positives that non-developers will encounter...
17:26:51 <ahf> so ideally they go away 8)
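(A hedged sketch of the "soft failure" mode ahf floats above; nothing like this exists in the tree, and the helper and environment-variable names below are made up for illustration. The idea is that a release-tarball or distro build could export the variable so a spurious stochastic failure only warns, while CI leaves it unset and still fails hard.)

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper: decide whether a stochastic-test failure should
     * fail the whole suite (as in CI) or be downgraded to a warning (e.g.
     * when a distro runs the tests at package-build time). */
    static int
    stochastic_failure_is_fatal(const char *test_name)
    {
      if (getenv("TOR_ALLOW_STOCHASTIC_FAILURES")) {  /* hypothetical variable */
        fprintf(stderr, "WARN: stochastic test %s failed; treating it as a "
                "soft failure. Re-run the tests to confirm.\n", test_name);
        return 0;  /* soft failure: keep the suite green */
      }
      return 1;    /* hard failure: fail the run */
    }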
17:27:08 <nickm> but if we do dial down the false-positive rate, we'll need to make sure that we _are_ running these tests frequently with adequate power
17:27:29 <catalyst> run them out of a cron-based Travis build?
17:27:33 <nickm> also I pulled the 1e-6 number out of my hat, so it could be silly
17:27:40 <nickm> catalyst: that seems plausible
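(A rough back-of-the-envelope for the suite-wide 1e-6 target mentioned above; the test count below is an illustrative assumption, not a measured number. With $k$ independent stochastic tests, each tuned to a per-test false-positive probability $p$:)

    P(\text{at least one spurious failure}) = 1 - (1 - p)^k \approx k\,p
    \quad\Longrightarrow\quad
    p \lesssim \frac{10^{-6}}{k}

So with, say, $k = 10$ such tests, each one would need $p$ on the order of $10^{-7}$, which is where the power/performance trade-off bites: keeping the miss rate acceptable at such a strict per-test threshold needs correspondingly more samples per run.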
17:27:57 <catalyst> we did have a chat with Riastradh about this a while ago; not sure if we wrote up any of it
17:29:38 * catalyst could try to dig up notes on that to see if there are any hints
17:29:44 <nickm> that would rock
17:30:58 <nickm> we can also think about disabling them or dialing down the sensitivity in 0.4.0, and doing fancier stuff in 0.4.1
17:31:40 <catalyst> nickm: i think back in January you and ahf did some stress tests that showed spurious failures at about the expected frequency that Riastradh calculated?
17:31:59 <ahf> that wasn't this one though
17:32:06 <ahf> the other ones were very easy to reproduce
17:32:11 <ahf> like many times for each 1000 test runs
17:32:25 <ahf> this one didn't show up on my windows machine for either 32-bit or 64-bit, where the tests were running for 24h
17:33:07 * nickm has asked Riastradh for permission to reproduce the discussion logs from that chat
17:33:08 <ahf> all the other recent windows timing/stochastic test issues have been very easy to reproduce
17:33:12 <nickm> (on #tor-dev just now)
17:33:15 <mikeperry> asn,nickm: I can chat about my understanding of the dormant mode stuff after the meeting.
17:33:25 <nickm> great
17:33:38 <catalyst> nickm: you're welcome to quote me from that chat too
17:33:54 <nickm> thanks!
17:34:09 <nickm> do we have anything else for this meeting?  If not, let's call it, and move the chat to #tor-dev?
17:34:36 <nickm> hearing nothing....
17:34:39 <nickm> thanks, everybody!
17:34:44 <nickm> I'll see you online!
17:34:45 <nickm> #endmeeting