16:59:25 #startmeeting Network team meeting, 21 March 2022
16:59:25 Meeting started Mon Mar 21 16:59:25 2022 UTC. The chair is ahf. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:59:25 Useful Commands: #action #agreed #help #info #idea #link #topic.
16:59:30 hello hello welcome
16:59:36 o/
16:59:37 salutations!
16:59:39 o/
16:59:44 afternoon
16:59:49 our pad is at https://pad.riseup.net/p/tor-netteam-2022.1-keep
16:59:59 o/
17:00:08 think eta and dgoulet may miss the meeting today, so we can just get going
17:00:22 o/
17:00:33 how are y'all doing with your boards https://gitlab.torproject.org/groups/tpo/core/-/boards ?
17:00:33 * eta is sitting in a park, having just gone on a cycle :p
17:00:50 IRC from the park. not bad
17:00:55 full but manageable
17:01:32 eta: very nice :-)
17:01:59 i don't spot anything off either
17:02:04 I seem to be accumulating tickets as I work. I *hope* this is because I'm refining the tasks rather than actually generating work faster than I dispose of it...
17:02:51 Diziet: i think that is a problem that happens a lot in tor. at some point most people on the team have had a moment where they say "i need to drop some tickets back as unassigned", to then either be picked up by others or by yourself later, if it feels like it's accumulating too much
17:03:16 i closed or unassigned myself from some tickets the other week too, in a batch, because the list had grown with some stuff that wasn't relevant anymore
17:03:20 I guess it's nice to have a target-rich environment.
17:03:47 ya
17:03:57 it is a bit why we do this board session here in the meeting
17:04:11 Anyway I don't think I have an actual *problem*. But thanks.
17:04:13 so if people have gotten something they need to shuffle around or drop, we can do it here, but usually people do it outside of the meetings too
17:04:15 excellent
17:04:39 we are skipping release status, but it is my impression all the tor.git stuff is generally making progress, and david was in C land for a lot of last week too
17:05:01 don't see anything unhandled incoming from other teams
17:05:21 no announcements or discussion items
17:05:34 i think we can move to s61 unless anybody has anything we need to dive into?
17:06:08 quick thing from me
17:06:19 I'm afk for ~1h on tuesday and ~1h on thursday for medical stuff
17:06:23 no conflicts with meetings
17:06:35 just be aware in case you need me to figure out in advance when I'm around :)
17:06:49 cool! thanks for the heads up
17:06:54 I may be away earlyish by my standards tomorrow, and may therefore miss the Tuesday 1700ish meeting
17:07:29 wait, we have none tomorrow, do we? we had the arti one last week
17:07:41 i remember because i was not able to speak much and i had a bit to say haha
17:07:55 Maybe I am confused and this is lossage from calendar transfer
17:08:18 i think so. tomorrow the team leads have some training we need to attend and there is some grant meeting, but that is all i have i think
17:08:32 My notes for tomorrow 1700 say "arti sync BBB" and "Not always??" so err IDK :-)
17:08:39 ahh hehe
17:08:47 I'm totally on top of my diary lol
17:08:49 they should be in NC now under "Network Team meetings" (the blue calendar)
17:08:56 mikeperry: wanna talk s61?
17:08:57 * Diziet looks
17:09:09 yeah ok
17:10:10 so I have been chasing another rabbithole wrt https://gitlab.torproject.org/tpo/core/tor/-/issues/40586. I have been trying to instrument shadow sims to tell us if circuits get used without congestion control
17:11:30 but it is kind of a mess. it is hard to write checks on edge_connection_t that cover real exit streams, and I may have also messed up checking onion services as a result
17:11:57 the short answer is that, according to perfclients, the performance of onion services with that fix is indeed better. so it improved things for them
17:12:13 but there may be rare cases with markovclients that still remain, or I may have just messed up the logging
17:12:42 jnewsome gave me a script to test markovclients with onions outside shadow, but I ended up fighting with the sim logging instead. hopefully I can run his script today in a VM or sth
17:13:23 jnewsome: oh yeah, your chameleon cloud credits need to be renewed soon; did you ask about extending them past April?
17:13:45 is chameleon the place you run the GL runners on?
17:14:07 yeah. it is where we get ephemeral runners when we want to run many many sims at once
17:14:08 mikeperry: I think micah just renewed them
17:14:24 nice
17:15:07 I was hoping to talk with dgoulet about whether, in his opinion, we should dig into this onion thing before the release, or just release. I guess since he is out, I will try to dig myself
17:15:48 wrt other stuff, it looks like hiro had some results from utilization and outliers, but it is still preliminary and not per-flag, and Rob had some comments on fixing it?
17:16:05 poke him later when he's around if it's something that may require release conversations
17:16:52 yes, rob made a comment and made me realize I had an obvious bug in the code.
17:17:05 I have now fixed that
17:19:24 anything else here? :-)
17:19:29 hiro: ok. do you think more sim boxes would help you?
17:19:57 right now I do not need extras, since I am chasing this log problem and probably should just do it outside shadow if I can
17:21:29 GeKo: also, do we have exits setting __AlwaysCongestionControl? I could also try those outside shadow, with tgen there too
17:21:55 mikeperry: our net-health one, d2d4
17:22:12 not sure about others
17:22:13 will it exit to any IP with port 80/443?
17:22:26 yes
17:22:56 ok great. then perhaps I can just test with it
17:23:00 at least as far as exit configuration is concerned
17:24:23 GeKo, juga: I replied on the sbws Exit CC MR. it looks like that is going well afaict. anything else for sbws and network-health to discuss?
17:24:40 mikeperry: i just needed what you replied :)
17:24:50 so nothing else from my side so far
17:24:53 not necessarily.
17:24:56 regarding your https://gitlab.torproject.org/tpo/network-health/analysis/-/issues/24#note_2783987
17:25:27 as i said in #tor-dev, i made some graphs indicating that we might want to consider a shorter timeframe than 72h for considering overload
17:25:36 in particular cases at least
17:25:55 yeah, that may need a change to the metrics-portal reporting
17:25:59 I guess that would be hiro?
17:26:04 however, before i have a final opinion i want to do the analysis in that ticket
17:26:10 ok
17:26:10 yeah
17:26:23 something to mull over i think
17:26:58 i finally did the analysis in https://gitlab.torproject.org/tpo/network-health/analysis/-/issues/16
17:27:29 where i applied the different metrics/data sources we collected in the wiki
17:27:33 to that issue
17:27:41 nothing new showed up as a result i think
17:27:56 but it was good to just do the whole process at least once and refine things a bit
17:28:02 that's all from my side
17:28:27 ok
17:29:09 well I think that's it. hopefully this logging issue doesn't end up being another insanely endless rabbithole. getting tired of these :/
17:29:41 \o/
17:29:49 very good, crossing fingers the rabbithole won't be too deep here
17:29:57 anything else we wanna dive into today?
17:29:59 (slightly traumatized at the prospect of digging into this for 2 weeks only to find out it is some subtle flag issue wrt the log checks)
17:30:11 but oh well
17:30:30 i usually think that the bigger things that enter tor.git take 1-2 releases before all the really annoying bugs are gone :-s
17:30:55 right. the problem is it is not clear if this is a bug or not, heh
17:31:14 right :-s
17:31:24 okay, let's call it today. talk to you all over the week o/
17:31:30 #endmeeting