16:00:21 #startmeeting tor anti-censorship meeting
16:00:21 here is our meeting pad: https://pad.riseup.net/p/r.9574e996bb9c0266213d38b91b56c469
16:00:21 editable link available on request
16:00:21 Meeting started Thu Nov 28 16:00:21 2024 UTC. The chair is shelikhoo. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:21 Useful Commands: #action #agreed #help #info #idea #link #topic.
16:00:23 hi~hi~
16:00:29 hi
16:00:47 hello
16:01:35 👋
16:03:25 let's start with the first topic: Broker Transition Anomaly
16:03:34 there are 3 observed issues
16:03:58 1. reduced amount of proxies
16:04:06 2. 5xx errors
16:04:20 3. metrics issue
16:05:00 I believe 1 and 2 are connected issues: they come from an nginx configuration issue and the DNS update delay
16:05:48 proxies had difficulty testing their NAT type and connecting to the broker
16:06:01 I have fixed some of them and am monitoring the issues
16:06:39 as for 3, the metrics issue, grafana was restarted
16:07:11 and the X-Forwarded-For header was added
16:07:14 it was prometheus that was restarted
16:07:30 TPA told me that we could have solved it by restarting the nginx on the old broker
16:07:37 yes... sorry for the mistake...
16:07:44 I added that note to the setup guide so we remember to do it next time
16:07:56 there is no nginx on the old broker
16:08:08 ahh, then restarting the broker
16:08:29 we had a machine update, a dns update, and a server setup update
16:08:40 so there were quite a lot of issues observed
16:08:46 will restarting the old broker now cause the remaining proxies to switch then?
16:08:53 thanks cohosh, meskio and WofWca!
16:09:12 🫡
16:09:12 cohosh: is there any reason to keep it up? could we just shut it down now?
16:09:21 yeah i suppose we should just shut it down
16:09:29 cohosh: I think we should forward the connections for a while
16:09:39 with a reduced server spec
16:09:43 if necessary
16:09:50 oh, it's set up to forward connections?
16:09:56 that should be easy with some iptables magic
16:10:05 currently it is not set up to forward the connections
16:10:34 but it should be possible if absolutely necessary
16:10:47 otherwise we could just decommission it
16:11:54 hopefully just restarting it will make most of the proxies switch, but yes forwarding the connections might be worth it for some period of time
16:13:03 yes, are we aware of any other issues with the new broker setup?
16:13:27 i still see a high proportion of 5XX errors on our CDN77 panel
16:13:32 no, the only two that I think are still standing are the unrestricted proxies and the probetest
16:13:38 and at least the proxies are improving
16:14:02 but fewer than before
16:15:02 hm, maybe we are close to normal levels of 5XX errors and this is a separate issue
16:15:04 cohosh: yes, I have made some changes to the configuration by guessing, but I will need to turn on logging to have some info to work on
16:15:27 okay
16:16:25 otherwise, it seems to be working well :) good job
16:16:29 meskio: yes, I think there are some proxies that won't discard cached dns info unless they are restarted
16:17:16 cohosh: thanks!!! and thank you as well...
16:17:48 but it would need to be the proxy operator that flushes the dns result
16:18:51 but anyway, I think everything is stabilizing
16:19:03 anything else we would like to discuss on this topic?
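[Editor's note on the X-Forwarded-For change mentioned above: once nginx terminates client connections, the backend only sees nginx's address unless it honours the forwarded header. The following is a minimal, hypothetical Go sketch of reading the client address behind a trusted reverse proxy; it is not the Snowflake broker's actual code, and the handler path and port are made up.]

```go
package main

import (
	"net"
	"net/http"
	"strings"
)

// clientIP prefers the X-Forwarded-For header when the service sits behind a
// trusted reverse proxy such as nginx, and falls back to the direct peer
// address otherwise.
func clientIP(r *http.Request) string {
	// X-Forwarded-For can carry a comma-separated chain of addresses; the
	// first entry is the original client, assuming the proxy is trusted.
	if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
		return strings.TrimSpace(strings.Split(xff, ",")[0])
	}
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		return r.RemoteAddr
	}
	return host
}

func main() {
	// Hypothetical endpoint, only for demonstration.
	http.HandleFunc("/client-ip", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(clientIP(r) + "\n"))
	})
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```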
16:19:06 nice
16:19:13 nothing from my side
16:19:22 and the next topic is
16:19:23 Evaluate [the plan for "Multi-Pool Matching Support"](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40156#note_3132445)
16:19:32 from WofWca[m]
16:19:56 Yeah if that's fine.
16:20:19 I looked at the code and laid out my idea of how this can be implemented
16:20:57 Again, not gonna lie, this is for my fork of Snowflake, and I'm hoping that I wouldn't have to maintain my own broker at least.
16:21:16 So I was hoping to move forward with implementing that functionality
16:21:33 If that plan sounds alright, I'll try to make an MR
16:22:17 But I don't want to push anyone into more maintenance, so if that's not something you all would like, just say so
16:22:24 * cohosh reads the comment on the issue
16:23:09 Basically it would effectively act as several brokers running on different HTTP paths
16:24:04 What I didn't consider in my comment though is that the nginx config would have to be adjusted also...
16:24:32 But I can also take a look to the best of my ability and say what needs to be changed about it.
16:24:55 I would like to see dcf's opinion on this, but it looks like he is afk today
16:24:57 > What I didn't consider in my comment though is that the nginx config would have to be adjusted also...
16:24:57 That is, to take the new paths into account
16:25:15 if I understood correctly from previous discussions we don't want to switch the current broker/network to a multi-pool system
16:25:57 but people were ok with exploring that idea for another network
16:26:54 I see
16:27:08 so it would be nice to add support for it, but I guess this is a big, complex change and we might need to discuss what the best design is
16:27:56 Ok, then I suppose let's wait for @dcf's comment?
16:28:04 my first instinct is that i would implement it more similarly to how we handle multiple NAT type pools rather than have separate paths
16:28:42 but i haven't really thought about it yet
16:29:14 cohosh: Do you think shared metrics would make sense?
16:29:45 I think it's better for them to be separate.
16:30:08 it is possible to add additional labels to the prometheus metrics
16:30:17 yep
16:30:27 snowflake_available_proxies{nat="restricted",type="badge"} 6
16:30:34 can be changed to
16:30:41 snowflake_available_proxies{nat="restricted",type="badge", pool="1222222"} 6
16:31:11 so that the data are segmented per pool, but can be combined later
16:31:15 it's easier to deal with tags than exporting entirely different metrics
16:31:39 IMO this is more complex than effectively running separate broker instances.
16:32:17 What I propose would basically be a bunch of changes to the main.go file
16:32:30 Without changing the internal logic of the broker
16:33:04 That is why I thought I could try to pitch it.
16:33:24 i see, could you effectively implement this with no broker changes and just a different reverse proxy setup?
16:34:06 so instead of the path being broker-url/proxy/pattern, it could be broker-url/pattern/proxy
16:34:19 and then run multiple brokers behind the reverse proxy
16:34:42 cohosh: Hmmmmm. Yes perhaps. I only learned about the fact that nginx is used recently.
16:35:01 we only started using nginx recently :)
16:35:26 nginx is a very recent thing...
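[Editor's note: the pool-label idea discussed above can be illustrated with the Prometheus Go client. This is a self-contained sketch, not the broker's actual metrics code; the listen port and the example pool value are placeholders.]

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// availableProxies mirrors the metric discussed above, with an extra "pool"
// label so per-pool counts stay separate in queries but can still be summed.
var availableProxies = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "snowflake_available_proxies",
		Help: "Number of available proxies by NAT type, proxy type and pool.",
	},
	[]string{"nat", "type", "pool"},
)

func main() {
	prometheus.MustRegister(availableProxies)

	// Produces (labels are exported in alphabetical order):
	// snowflake_available_proxies{nat="restricted",pool="1222222",type="badge"} 6
	availableProxies.With(prometheus.Labels{
		"nat":  "restricted",
		"type": "badge",
		"pool": "1222222",
	}).Set(6)

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe("127.0.0.1:9099", nil))
}
```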
16:35:31 X~X
16:36:11 But IMO the two approaches, changing nginx and changing main.go, should not be too different effort-wise and maintenance-wise
16:37:07 even if the main.go patch is simple, i would still prefer to try it in nginx first
16:37:21 yes, these two designs are similar as they would require sending a poll request to more than one address
16:37:40 if there are a lot of pools, then it would require a lot of polling
16:37:43 WofWca[m]: you might also be interested in taking a look at https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/39 in light of this idea
16:38:07 rather than a single poll serving all pools
16:38:25 it might be that only some of the components need to have multiple instances running
16:38:27 but I understand your design is much much simpler
16:38:34 and if not, maybe we want to reconsider how we split things up
16:39:05 shelikhoo: For my project there is probably gonna be just one pool with a very lax allowedRelayPattern
16:39:14 yes...
16:39:21 shelikhoo: that's a good point
16:40:21 yes, I think WofWca[m]'s plan works for your intended purpose
16:40:23 cohosh: OK, I'll consider the nginx approach and comment my ideas
16:40:38 anything more on this topic?
16:40:45 WofWca[m]: great, thanks for sharing your progress with us!
16:41:11 shelikhoo: Let's move on I suppose
16:41:37 Thanks for dedicating your time to this everyone!
16:41:41 yes
16:42:08 and the next topic is:
16:42:09 Snowflake blocking in Russia seems to have stopped
16:42:09 nice work on the design WofWca[m]
16:42:21 yeah \o/
16:42:22 yes! nice work!
16:42:37 meskio: Thanks ❤️!
16:42:51 I have no idea whether the censorship will come back,
16:43:07 yeah, i've been keeping an eye on our vantage point results
16:43:25 other than some difficulties after the broker upgrade, it's a lot more stable than before
16:43:28 cohosh: you were exploring how to set up alerts for those things, weren't you?
16:43:39 meskio: yeah i'm currently working on that
16:43:44 nice
16:44:07 i'll use this censorship event as a guide
16:44:18 the vantage point in china is still showing some issues
16:44:25 https://gitlab.torproject.org/tpo/anti-censorship/connectivity-measurement/bridgestatus/-/blob/main/recentResult_cnnext-2?ref_type=heads
16:44:49 as we can see there are some issues with finishing the bootstrapping
16:45:00 oof, yeah
16:45:12 with only a 15% chance to finish bootstrapping
16:45:41 I will try to see if I can do some analysis on it
16:45:42 is that new? i know it's always been slower to bootstrap there
16:45:52 maybe because of the high packet loss rate
16:45:53 I don't think it is new
16:46:21 I will have a look at the packet capture to see exactly what happened
16:47:13 but I think it is also possible that there is censorship based on grabbing ip addresses from the broker
16:47:45 so more research is needed
16:47:47 over
16:47:54 yeah that will be interesting to check, we had some evidence of that very early on when there were only tens of proxies
16:48:38 anything more we would like to discuss about this topic?
16:48:41 nothing more from me
16:49:06 anything else we would like to discuss in this meeting?
16:49:15 not from me
16:49:39 okay I think we can end the meeting here
16:49:41 thanks!!!
16:49:43 #endmeeting
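[Editor's footnote on the per-pool polling cost raised in the multi-pool discussion: if pools were ever exposed as separate broker paths, a proxy would need one poll request per pool. The sketch below is purely illustrative; the URLs, paths, and the empty poll body are assumptions, not part of any agreed design or of the real Snowflake proxy.]

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
	"time"
)

// poolPollURLs are placeholder endpoints; the real broker exposes a single
// /proxy endpoint today, and no per-pool paths have been agreed on.
var poolPollURLs = []string{
	"https://broker.example/pool-a/proxy",
	"https://broker.example/pool-b/proxy",
}

// pollOnce sends one (empty) poll to a single pool endpoint. A real proxy
// poll would carry the proxy's NAT type, version and other fields.
func pollOnce(client *http.Client, url string) error {
	resp, err := client.Post(url, "application/json", strings.NewReader("{}"))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println(url, resp.Status)
	return nil
}

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	// One request per pool: the polling cost grows linearly with the number
	// of pools, which is the drawback raised in the discussion above.
	for _, u := range poolPollURLs {
		if err := pollOnce(client, u); err != nil {
			fmt.Println("poll failed:", err)
		}
	}
}
```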