16:59:37 #startmeeting Network team meeting, 4th April 2022
16:59:37 Meeting started Mon Apr 4 16:59:37 2022 UTC. The chair is ahf. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:59:37 Useful Commands: #action #agreed #help #info #idea #link #topic.
16:59:39 yoyoyo
16:59:41 hihihi
16:59:53 o/
16:59:54 https://pad.riseup.net/p/tor-netteam-2022.1-keep is our pad
17:00:05 i am just gonna switch computer
17:00:16 o/
17:00:49 how are you all doing with tickets in https://pad.riseup.net/p/tor-netteam-2022.1-keep ?
17:01:30 wrong link? :p
17:01:47 ugh, that is the whole thing with switching computers that have different copy/paste means
17:01:55 https://gitlab.torproject.org/groups/tpo/core/-/boards
17:02:04 * ahf does the qubes copy/paste handshake instead of the windows one
17:02:20 so far so good for me
17:02:51 ahf: btw, eta, dgoulet and I need to talk about congestion control in arti; is that a good thing to add as a discussion item?
17:03:15 feel free to, but a lot of the s61 people are not in this meeting because there was the s61 meeting earlier today
17:03:32 I am here
17:03:48 dgoulet: how are releases doing?
17:04:15 so
17:04:38 we are planning a first release candidate this week, specifically on Thursday because I'll be afk on Friday
17:05:16 exciting!
17:05:16 so far, we have 1 fix in 047 and 1 open MR that might not end up in 047... so we are in a good place there
17:05:27 and likely a stable by end of month :)
17:05:28 that is about it
17:06:36 very nice
17:06:57 wiki page looks good for releases
17:07:15 ah yeah!
17:07:20 nothing to do with backports
17:07:24 I overhauled those a couple weeks ago!
17:08:16 nice
17:08:51 don't think there is anything to add to tpo/core/team.. maybe we'll soon have a month where there is some time to spend on at least one of those items..
17:08:54 mostly on me here
17:09:31 we have quite a few tickets assigned to triage-bot but it looks all related to things that are pending a post-stable-release look
17:09:45 so if we release stable before end of april, then we should spend some time going over these in may
17:09:50 probably dgoulet and i should dive in there
17:10:34 all incoming things from other teams look alright
17:10:36 ok!
17:11:11 i am probably going to do some extended easter next week, but where i am around for some of the meetings, so monday, tuesday, and wednesday i will be around but it will be a bit limited, and thursday+friday next week i am off
17:11:29 nickm: you wanna talk about CC in arti?
17:11:54 yeah; eta is interested in seeing what she can start here, so we hoped we could rope dgoulet/mikeperry into a conversation about first steps
17:12:00 indeed :)
17:12:04 sweeet
17:12:06 either at this meeting or perhaps on a call :)
17:12:41 yeah this sounds good. my rust is still awful, but I can def advise and make sure things like test vectors are useful
17:12:52 at the very least, I'd like to just get an idea of the basic principles so I can start to plan out what design we want
17:13:06 but enough to actually start a proper implementation would also be nifty
17:13:07 dgoulet would be great to pair with, esp for the flow control piece
17:13:46 yes, I would like to be pulled into this call :)
17:14:03 I can't go in depth like mikeperry on the CC algos but I think I can be useful either way
17:14:48 we have kept the proposal up to date and checked it over closely in each code review: https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/324-rtt-congestion-control.txt
17:15:05 eta: I think you've already read prop#324, but dgoulet and mikeperry can do a good job of explaining which parts are actually needed and which are more C-specific or more experimental.
17:15:05 aha!
thanks, I'll go have a read of that
17:15:23 (some parts of that proposal are not necessary-to-implement, and some are C-only, IIUC.)
17:15:53 step 1 might be RTT estimation
17:15:56 I think the main question I have immediately is how much kernel-level data we'll need
17:16:18 didn't the previous algorithm (which I can't remember the name of for some reason) require asking the kernel for receive windows or something?
17:16:30 ah that is for cell scheduling
17:16:35 CC doesn't require kernel level info
17:16:40 yes, we are only using one of the BDP estimation algs (RTT estimation), and one of the CC algs (TOR_VEGAS) from there. I believe this is also documented in the proposal
17:16:46 but cell scheduling does?
17:17:03 KIST, that's what it was called
17:17:04 cell scheduling does, as in KIST, yes, but that is different from congestion control
17:17:16 ahh right, I was a tad confused
17:17:23 eta: we did not use any non-standard APIs, though flow control could as an optimization
17:17:25 (I mean they are connected in a way but here I think we just want client side congestion control)
17:17:54 yeah; I'm more thinking that if I rework the circuit reactor again, I need to take both of these things into account
17:18:14 in fact, no socket APIs are used at all.. however there are some libevent abstractions of them that are relied upon
17:18:16 and we are implementing KIST in arti, yes? (or is that not decided yet)
17:18:22 mikeperry: excellent, thanks
17:19:08 eta: I think that's a Maybe. We're definitely doing "kist-lite" or something like it.
17:19:29 particularly congestion control needs to know when an OR connection blocks; and flow control has some heuristics derived from how libevent fires wrt sockets being ready to read/write
17:19:33 mikeperry: it's documented in the proposal, but the first 5-10 times I read the proposal I missed it.
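[For readers following the log: the TOR_VEGAS algorithm named above can be sketched roughly as below. This is an illustrative Rust sketch, not arti's or C tor's code; the function name and parameter values are invented here, and prop#324 defines the real algorithm, including slow start, which this omits.]

```rust
/// Illustrative Vegas-style congestion window update, loosely after
/// prop#324's TOR_VEGAS: estimate how many cells are queued at the
/// bottleneck (cwnd - BDP) and nudge cwnd to keep that queue between
/// alpha and beta. All names and constants here are made up for this
/// sketch; the proposal specifies the real parameters.
fn vegas_update(cwnd: u64, bdp: u64, alpha: u64, beta: u64, inc: u64) -> u64 {
    let queue_use = cwnd.saturating_sub(bdp);
    if queue_use < alpha {
        cwnd + inc // path underutilized: grow the window
    } else if queue_use > beta {
        cwnd.saturating_sub(inc) // queue building up: back off
    } else {
        cwnd // queue within the target range: hold steady
    }
}
```

The appeal of this family of algorithms in Tor's setting is that it needs only RTT-derived state, which matches the point made above that CC requires no kernel-level info.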
17:20:18 nickm: excuse me for being monumentally dumb for a second: cell scheduling is "when should arti/tor give new cells to the kernel", right?
17:20:28 KIST client side is... questionable imo, especially due to that "grace period" ...
17:21:01 eta: I think it's more like, "if there are multiple circuits that all want to write to a channel, and multiple channels that all want to write to the network... which cell do we send next?"
17:21:12 nickm: right, okay, thanks
17:21:20 does that require coordination between channels *and* circuits?
17:21:32 sadly yeah
17:21:52 at least in c
17:22:02 hm, okay
17:22:18 to what extent does cell scheduling interact with congestion control?
17:22:22 I guess not much, right
17:22:28 yeah... we prioritize cells over ALL possible channels
17:22:28 yeah, given a circuit, you need to know when the channel becomes unblocked or not, and you need to know if all the streams on it are blocked
17:22:35 for congestion control
17:23:01 cell scheduling is "what ready-to-send circuits should I send on"; CC is "how do I maximise usage of a stream / channel"; flow control is "how do I determine readiness to send"?
17:23:06 for flow control, its behavior is empirically derived from how the kernel and libevent behave
17:23:30 good summary imo! :)
17:23:48 nice
17:24:07 eta: yeah. congestion control applies to circuits; flow control applies to streams
17:24:33 hmm, what piece is responsible for multiplexing streams on a single circuit?
17:24:33 ... wait
17:24:41 mikeperry: flow control is circuit based now, no?
17:24:52 is that also part of cell scheduling?
17:25:32 (by "multiplexing" I mean "I have a bunch of streams with data, what order do I send them in", or "how do I manage multiple streams on a circuit competing for bandwidth")
17:25:38 I propose we do a call to go over all this? :D
17:25:43 eta: flow control, to an extent, iiuc
17:25:49 dgoulet: no, it is per-stream..
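[The "which cell do we send next?" framing above can be illustrated with a toy round-robin scheduler. This is a hypothetical sketch only: real cell scheduling (KIST in C tor) also weights circuits by priority/EWMA and consults kernel socket state, none of which appears here; the types and names are invented.]

```rust
use std::collections::VecDeque;

/// Toy circuit with a queue of pending cells (payloads elided to &str).
/// Purely illustrative: arti's actual circuit type looks nothing like this.
struct Circuit {
    id: u32,
    queue: VecDeque<&'static str>,
}

/// Pick the next cell to write to a channel by round-robining over
/// circuits, skipping circuits with nothing queued. Returns the circuit
/// id and the cell, or None if every circuit's queue is empty.
fn next_cell(circuits: &mut VecDeque<Circuit>) -> Option<(u32, &'static str)> {
    for _ in 0..circuits.len() {
        let mut circ = circuits.pop_front()?;
        let cell = circ.queue.pop_front();
        let id = circ.id;
        circuits.push_back(circ); // rotate to the back, ready or not
        if let Some(cell) = cell {
            return Some((id, cell));
        }
    }
    None
}
```

Even this toy shows why scheduling needs coordination across circuits *and* channels, as discussed above: the decision is made per channel but over the set of circuits attached to it.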
but it needs to examine the circuit to get the CC object in our implementation. that's just an implementation artifact tho
17:26:16 dgoulet: yeah, sure, that might be a better idea :)
17:26:24 eta: to the extent that flow control decides "on" or "off" for each stream. Then all the "on" streams get treated "fairly", whatever that means. :)
17:26:27 IIUC
17:26:28 mikeperry: I'm a bit confused because we _removed_ stream level SENDMEs :P
17:26:32 but yeah lets do a call
17:26:50 yeah flow control is what we do instead of stream level SENDMEs
17:26:59 is tomorrow a US public holiday?
17:27:15 (I can see an event with no name in the "public holidays" calendar, but that might be a glitch)
17:27:16 IMO the best place for eta to start would be the RTT estimation and estimated bandwidth-delay products. do others agree?
17:27:27 it's not a holiday afaik
17:27:27 eta: I'm not aware of such a holiday :)
17:27:30 cool
17:27:37 so we could do a call then at some point
17:27:42 I'm fine with tomorrow for a call!
17:27:51 wfm; I'm free most of the day
17:28:01 except at 1500 and 2000 UTC
17:28:11 I am free except 18UTC
17:28:27 1600 UTC?
17:28:29 +1
17:28:42 wfm
17:28:43 (RTT estimation is a good place, yes. it will help wrap your head around how SENDMEs behave as acks)
17:28:57 you can take the thursday meeting bbb room
17:29:08 16UTC is fine, yah
17:29:28 eta: independently we should talk about reactor refactoring. I have some thoughts but I want to re-read the code to see if they're plausible
17:29:31 very nice. i will skip on this one, but it sounds like the right people are there
17:29:42 my main thought is that we probably do need custom futures stuff...
17:30:08 but we should see if we can refactor so that the custom futures stuff is mostly isolated from the rest of the logic.
17:30:10 nickm: do you think we need a call for that, or do you reckon we could chat about that over IRC?
17:30:17 * eta doesn't mind either way
17:30:31 up to you!
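[Since RTT estimation and the bandwidth-delay product are suggested above as the starting point, here is a rough sketch of how SENDME-measured RTTs could feed an EWMA and an RTT-based BDP estimate. The type, method names, and smoothing factor are invented for illustration; prop#324 specifies the actual estimators.]

```rust
use std::time::Duration;

/// Illustrative EWMA-based RTT estimator in the spirit of prop#324's
/// RTT-based BDP estimation. SENDMEs act as acks: the RTT sample is the
/// time from sending the cell that triggers a SENDME to receiving that
/// SENDME. Everything here is a sketch, not arti's real API.
struct RttEstimator {
    ewma_rtt: Option<f64>, // smoothed RTT, milliseconds
    min_rtt: f64,          // minimum RTT observed, milliseconds
    alpha: f64,            // EWMA smoothing factor (arbitrary value here)
}

impl RttEstimator {
    fn new() -> Self {
        RttEstimator { ewma_rtt: None, min_rtt: f64::INFINITY, alpha: 0.1 }
    }

    /// Feed one SENDME-measured RTT sample into the EWMA and the minimum.
    fn on_sendme_rtt(&mut self, sample: Duration) {
        let ms = sample.as_secs_f64() * 1000.0;
        self.min_rtt = self.min_rtt.min(ms);
        self.ewma_rtt = Some(match self.ewma_rtt {
            None => ms,
            Some(prev) => self.alpha * ms + (1.0 - self.alpha) * prev,
        });
    }

    /// RTT-based BDP estimate in cells: with cwnd cells outstanding,
    /// roughly cwnd * min_rtt / ewma_rtt of them are actually "in flight"
    /// on the path rather than sitting in a queue.
    fn bdp_estimate(&self, cwnd: u64) -> Option<u64> {
        self.ewma_rtt
            .map(|ewma| ((cwnd as f64) * self.min_rtt / ewma) as u64)
    }
}
```

This is also why RTT estimation is a good first step: the BDP estimate it produces is the input the congestion-control algorithm compares the window against.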
Let's see whether we reach a meeting of minds on IRC, then move to bbb if not?
17:30:34 cool
17:30:38 that works
17:30:47 (do you think my general thought is plausible?)
17:30:49 do we want to do that before the congestion control meeting?
17:31:26 17:31:44 <+nickm> (do you think my general thought is plausible?) ← I certainly agree it's something to strive for; I don't know how feasible it'll turn out to be
17:31:27 IMO we don't have to, but it won't hurt. How about tomorrow morning once we're both around and awake?
17:31:31 cool, wfm
17:31:36 eta: ack; i'm in the same boat
17:31:50 very good
17:31:57 do we have other things we wanna go over today?
17:32:23 * dgoulet is good
17:32:43 i'm fine :)
17:33:02 * eta is also good
17:34:20 let's call it then
17:34:26 thanks folks for the long version of this meeting!
17:34:31 #endmeeting