16:00:21 <shelikhoo> #startmeeting tor anti-censorship meeting
16:00:21 <shelikhoo> here is our meeting pad: https://pad.riseup.net/p/r.9574e996bb9c0266213d38b91b56c469
16:00:21 <shelikhoo> editable link available on request
16:00:21 <MeetBot> Meeting started Thu Nov 28 16:00:21 2024 UTC.  The chair is shelikhoo. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:21 <MeetBot> Useful Commands: #action #agreed #help #info #idea #link #topic.
16:00:23 <shelikhoo> hi~hi~
16:00:29 <cohosh> hi
16:00:47 <meskio> hello
16:01:35 <WofWca[m]> 👋
16:03:25 <shelikhoo> let's start with first topic: Broker Transition Anomaly
16:03:34 <shelikhoo> there are 3 observed issues
16:03:58 <shelikhoo> 1. reduced amount of proxies
16:04:06 <shelikhoo> 2. 5xx errors
16:04:20 <shelikhoo> 3. metrics issue
16:05:00 <shelikhoo> I believe 1 and 2 are connected issues: because of an nginx configuration issue and a DNS update delay
16:05:48 <shelikhoo> it was difficult for proxies to test their NAT type and connect to the broker
16:06:01 <shelikhoo> I have fixed some of them and am monitoring the issues
16:06:39 <shelikhoo> as for 3, the metrics issue: grafana was restarted
16:07:11 <shelikhoo> and X-forwarded-for header was added
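Context for the X-Forwarded-For change: with the broker behind nginx, the broker process only sees nginx's address unless nginx forwards the original client address in that header. A minimal Go sketch of how a service behind a reverse proxy typically recovers the client IP (illustrative only, not the broker's actual code):

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
	"strings"
)

// clientIP recovers the original client address for a request that came
// through a reverse proxy: use the left-most X-Forwarded-For entry if the
// proxy set one, otherwise fall back to the TCP peer address.
func clientIP(r *http.Request) string {
	if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
		return strings.TrimSpace(strings.Split(xff, ",")[0])
	}
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		return r.RemoteAddr
	}
	return host
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, clientIP(r))
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```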
16:07:14 <meskio> it was prometheus that was restarted
16:07:30 <meskio> TPA told me that we could have solved it by restarting the nginx in the old broker
16:07:37 <shelikhoo> yes... sorry for the mistake...
16:07:44 <meskio> I added that note to the setup guide to remember the next time to do it
16:07:56 <shelikhoo> there is no nginx on the old broker
16:08:08 <meskio> ahh, then restarting the broker
16:08:29 <shelikhoo> we had a machine update, a DNS update, and a server setup update
16:08:40 <shelikhoo> so there were quite a lot of issues observed
16:08:46 <cohosh> will restarting the old broker now cause the remaining proxies to switch then?
16:08:53 <shelikhoo> thanks cohosh, meskio and WofWca!
16:09:12 <WofWca[m]> 🫡
16:09:12 <meskio> cohosh: is there any reason to keep it up? could we just shut it down now?
16:09:21 <cohosh> yeah i suppose we should just shut it down
16:09:29 <shelikhoo> cohosh: I think we should forward the connection for a while
16:09:39 <shelikhoo> with a reduced server spec
16:09:43 <shelikhoo> if necessary
16:09:50 <cohosh> oh, it's set up to forward connections?
16:09:56 <meskio> that should be easy with some iptables magic
16:10:05 <shelikhoo> currently it is not set up to forward the connection
16:10:34 <shelikhoo> but it should be possible if absolutely necessary
16:10:47 <shelikhoo> otherwise we could just decommission it
16:11:54 <meskio> hopefully just restarting it will make most of the proxies switch, but yes forwarding the connection might be worth it for some period of time
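For reference, "forwarding the connection" here means relaying traffic that still reaches the old broker's address on to the new broker, e.g. with an iptables DNAT rule as meskio suggests. A minimal Go sketch of an equivalent TCP relay, purely as an illustration (the target address is a placeholder):

```go
package main

import (
	"io"
	"log"
	"net"
)

// Placeholder address for the new broker; in practice the same effect is
// achieved with an iptables DNAT rule, as discussed above.
const newBrokerAddr = "new-broker.example.net:443"

func main() {
	// Accept everything that still arrives on the old broker's port and
	// pipe it, byte for byte, to the new broker.
	ln, err := net.Listen("tcp", ":443")
	if err != nil {
		log.Fatal(err)
	}
	for {
		c, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			upstream, err := net.Dial("tcp", newBrokerAddr)
			if err != nil {
				log.Print(err)
				return
			}
			defer upstream.Close()
			go io.Copy(upstream, c) // client -> new broker
			io.Copy(c, upstream)    // new broker -> client
		}(c)
	}
}
```

Because this relays raw TCP, TLS passes through untouched, so the new broker would need to present a certificate valid for the old hostname.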
16:13:03 <shelikhoo> yes, are we aware of any other issues with the new broker setup?
16:13:27 <cohosh> i still see a high proportion of 5XX errors on our CDN77 panel
16:13:32 <meskio> no, the only two that I think are still standing are the unrestricted proxies and the probetest
16:13:38 <meskio> and at least the proxies are improving
16:14:02 <cohosh> but fewer than before
16:15:02 <cohosh> hm, maybe we are close to normal levels of 5XX errors and this is a separate issue
16:15:04 <shelikhoo> cohosh: yes, I have made some changes to the configuration by guessing, but I will need to turn on logging to have some info to work with
16:15:27 <cohosh> okay
16:16:25 <cohosh> otherwise, it seems to be working well :) good job
16:16:29 <shelikhoo> meskio: yes, I think there are some proxies that won't discard cached DNS info unless they are restarted
16:17:16 <shelikhoo> cohosh: thanks!!! and thank you as well...
16:17:48 <shelikhoo> but it would need to be the proxy operators that flush the DNS result
16:18:51 <shelikhoo> but anyway, I think everything is stabilizing
16:19:03 <shelikhoo> anything else we would like to discuss on this topic?
16:19:06 <meskio> nice
16:19:13 <meskio> nothing from my side
16:19:22 <shelikhoo> and next topic is
16:19:23 <shelikhoo> Evaluate [the plan for "Multi-Pool Matching Support"](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40156#note_3132445)
16:19:32 <shelikhoo> from WofWca[m]
16:19:56 <WofWca[m]> Yeah if that's fine.
16:20:19 <WofWca[m]> I looked at the code and laid out my idea of how this can be implemented
16:20:57 <WofWca[m]> Again, not gonna lie, this is for my fork of Snowflake, and I'm hoping that I won't have to maintain my own broker, at least.
16:21:16 <WofWca[m]> So I was hoping to move forward with implementing that functionality
16:21:33 <WofWca[m]> If that plan sounds alright, I'll try to make an MR
16:22:17 <WofWca[m]> But I don't want to push anyone into more maintenance, so if that's not something you all would like, just say so
16:22:24 * cohosh reads the comment on the issue
16:23:09 <WofWca[m]> Basically it would effectively act as several brokers running on different HTTP paths
16:24:04 <WofWca[m]> What I didn't consider in my comment though is that nginx config would have to be adjusted also...
16:24:32 <WofWca[m]> But I can also take a look to the best of my ability and say what needs to be changed about it.
16:24:55 <meskio> I would like to see dcf's opinion on this, but it looks like he is afk today
16:24:57 <WofWca[m]> > What I didn't consider in my comment though is that nginx config would have to be adjusted also...
16:24:57 <WofWca[m]> That is, to take new paths into account
16:25:15 <meskio> if I understood correctly from previous discussions we don't want to switch the current broker/network to a multi-pool system
16:25:57 <meskio> but people were ok with exploring that idea for another network
16:26:54 <WofWca[m]> I see
16:27:08 <meskio> so it would be nice to add support for it, but I guess this is a big, complex change and we might need to discuss what the best design is
16:27:56 <WofWca[m]> Ok, then I suppose let's wait for @dcf's comment?
16:28:04 <cohosh> my first instinct is that i would implement it more similarly to how we handle multiple NAT type pools rather than have separate paths
16:28:42 <cohosh> but i haven't really thought about it yet
16:29:14 <WofWca[m]> cohosh: Do you think shared metrics would make sense?
16:29:45 <WofWca[m]> I think it's better for them to be separate.
16:30:08 <shelikhoo> it is possible to add additional labels to the prometheus metrics
16:30:17 <cohosh> yep
16:30:27 <shelikhoo> snowflake_available_proxies{nat="restricted",type="badge"} 6
16:30:34 <shelikhoo> can be changed to
16:30:41 <shelikhoo> snowflake_available_proxies{nat="restricted",type="badge", pool="1222222"} 6
16:31:11 <shelikhoo> so that the data are segmented to different pool, but can be combined later
16:31:15 <cohosh> it's easier to deal with tags than exporting entirely different metrics
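A rough sketch of the label-based approach with the standard Prometheus Go client; the metric and label names mirror the example above, everything else is hypothetical. Per-pool series can later be recombined with a query like `sum without(pool) (snowflake_available_proxies)`.

```go
package main

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// One gauge with a "pool" label instead of a separate metric per pool.
// The data stays segmented per pool but can be re-aggregated in PromQL.
var availableProxies = promauto.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "snowflake_available_proxies",
		Help: "Number of available snowflake proxies",
	},
	[]string{"nat", "type", "pool"},
)

func main() {
	// Hypothetical values, matching the example line quoted above.
	availableProxies.
		With(prometheus.Labels{"nat": "restricted", "type": "badge", "pool": "1222222"}).
		Set(6)
}
```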
16:31:39 <WofWca[m]> IMO this is more complex than effectively running separate broker instances.
16:32:17 <WofWca[m]> What I propose would basically be a bunch of changes to the main.go file
16:32:30 <WofWca[m]> Without changing the internal logic of the broker
16:33:04 <WofWca[m]> That is why I thought I could try to pitch it.
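As described, the main.go change would construct several independent broker instances and mount each one's handlers under its own path prefix on a single mux. A sketch of the idea; the brokerPool type and constructor are hypothetical stand-ins, not actual Snowflake code:

```go
package main

import (
	"log"
	"net/http"
)

// brokerPool stands in for one fully independent broker instance
// (its own proxy heap, its own allowed relay pattern, its own metrics).
type brokerPool struct {
	name string
}

func newBrokerPool(name, allowedRelayPattern string) *brokerPool {
	return &brokerPool{name: name}
}

// handler would return the same mux of endpoints the broker already
// serves (client/proxy/answer/...), bound to this pool's state.
func (b *brokerPool) handler() http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("pool " + b.name + "\n"))
	})
	return mux
}

func main() {
	root := http.NewServeMux()
	// Each pool lives under its own path prefix; the default pool can
	// stay at / for backwards compatibility.
	root.Handle("/", newBrokerPool("default", "snowflake.torproject.net$").handler())
	root.Handle("/pool-a/", http.StripPrefix("/pool-a",
		newBrokerPool("pool-a", ".*").handler()))
	log.Fatal(http.ListenAndServe(":8080", root))
}
```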
16:33:24 <cohosh> i see, could you effectively implement this with no broker changes and just a different reverse proxy set up?
16:34:06 <cohosh> so instead of the path being broker-url/proxy/pattern, it could be broker-url/pattern/proxy
16:34:19 <cohosh> and then run multiple brokers behind the reverse proxy
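cohosh's alternative keeps the broker untouched and does the per-pattern routing in the reverse proxy instead. In production that would be a few nginx location blocks; it is sketched here in Go only to stay consistent with the other examples (ports and prefixes are placeholders):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// proxyTo builds a reverse proxy to one broker instance and strips the
// pool prefix, so each backend still sees the usual endpoint paths.
func proxyTo(backend, prefix string) http.Handler {
	u, err := url.Parse(backend)
	if err != nil {
		log.Fatal(err)
	}
	return http.StripPrefix(prefix, httputil.NewSingleHostReverseProxy(u))
}

func main() {
	mux := http.NewServeMux()
	// broker-url/<pattern>/<endpoint> -> one broker process per pattern.
	mux.Handle("/pool-a/", proxyTo("http://127.0.0.1:8081", "/pool-a"))
	mux.Handle("/pool-b/", proxyTo("http://127.0.0.1:8082", "/pool-b"))
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```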
16:34:42 <WofWca[m]> cohosh: Hmmmmm. Yes perhaps. I only learned about the fact that nginx is used recently.
16:35:01 <cohosh> we only started using nginx recently :)
16:35:26 <shelikhoo> nginx is a very recent thing...
16:35:31 <shelikhoo> X~X
16:36:11 <WofWca[m]> But IMO the two approaches, changing nginx vs. changing main.go, should not be too different effort-wise and maintenance-wise
16:37:07 <cohosh> even if the main.go patch is simple, i would still prefer to try it in nginx first
16:37:21 <shelikhoo> yes, these two designs are similar in that they would require sending poll requests to more than one address
16:37:40 <shelikhoo> if there are a lot of pools, then it would require a lot of polling
16:37:43 <cohosh> WofWca[m]: you might also be interested in taking a look at https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/39 in light of this idea
16:38:07 <shelikhoo> rather than a single poll serving all pools
16:38:25 <cohosh> it might be that only some of the components need to have multiple instances running
16:38:27 <shelikhoo> but I understand your design is much much simpler
16:38:34 <cohosh> and if not, maybe we want to reconsider how we split things up
16:39:05 <WofWca[m]> shelikhoo: For my project there is probably gonna be just one pool with a very lax allowedRelayPattern
16:39:14 <shelikhoo> yes...
16:39:21 <cohosh> shelikhoo: that's a good point
16:40:21 <shelikhoo> yes, I think WofWca[m]'s plan works for your intended purpose
16:40:23 <WofWca[m]> cohosh: OK, I'll consider the nginx approach and comment my ideas
16:40:38 <shelikhoo> anything more on this topic?
16:40:45 <cohosh> WofWca[m]: great, thanks for sharing your progress with us!
16:41:11 <WofWca[m]> shelikhoo: Let's move on I suppose
16:41:37 <WofWca[m]> Thanks for dedicating your time for this everyone!
16:41:41 <shelikhoo> yes
16:42:08 <shelikhoo> and next topic is:
16:42:09 <shelikhoo> Snowflake blocking in Russia seems to have stopped
16:42:09 <meskio> nice work on the design WofWca[m]
16:42:21 <meskio> yeah \o/
16:42:22 <shelikhoo> yes! nice work!
16:42:37 <WofWca[m]> meskio: Thanks ❤️!
16:42:51 <shelikhoo> I have no idea whether the censorship will come back
16:43:07 <cohosh> yeah, i've been keeping an eye on our vantage point results
16:43:25 <cohosh> other than some difficulties after the broker upgrade, it's a lot more stable than before
16:43:28 <meskio> cohosh: you were exploring how to set up alerts for those things, weren't you?
16:43:39 <cohosh> meskio: yeah i'm currently working on that
16:43:44 <meskio> nice
16:44:07 <cohosh> i'll use this censorship event as a guide
16:44:18 <shelikhoo> the vantage point in china is still showing some issues
16:44:25 <shelikhoo> https://gitlab.torproject.org/tpo/anti-censorship/connectivity-measurement/bridgestatus/-/blob/main/recentResult_cnnext-2?ref_type=heads
16:44:49 <shelikhoo> as we can see there are some issues with finishing the bootstrap
16:45:00 <cohosh> oof, yeah
16:45:12 <shelikhoo> with only a 15% chance of finishing the bootstrap
16:45:41 <shelikhoo> I will try to see if I can do some analysis on it
16:45:42 <cohosh> is that new? i know it's always been slower to bootstrap there
16:45:52 <cohosh> maybe because of the high packet loss rate
16:45:53 <shelikhoo> I don't think it is new
16:46:21 <shelikhoo> I will check the packet capture to see exactly what happened
16:47:13 <shelikhoo> but I think it is also possible that there is censorship based on grabbing proxy IP addresses from the broker
16:47:45 <shelikhoo> so more research is needed
16:47:47 <shelikhoo> over
16:47:54 <cohosh> yeah that will be interesting to check, we had some evidence of that very early on when there were only tens of proxies
16:48:38 <shelikhoo> anything more we would like to discuss about this topic?
16:48:41 <cohosh> nothing more from me
16:49:06 <shelikhoo> anything else we would like to discuss in this meeting?
16:49:15 <meskio> not from me
16:49:39 <shelikhoo> okay I think we can end the meeting here
16:49:41 <shelikhoo> thanks!!!
16:49:43 <shelikhoo> #endmeeting