18:00:24 #startmeeting anti-censorship team meeting
18:00:24 Meeting started Thu Dec 5 18:00:24 2019 UTC. The chair is phw. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:24 Useful Commands: #action #agreed #help #info #idea #link #topic.
18:00:28 hi everyone!
18:00:30 hi
18:00:43 here's our meeting pad: https://pad.riseup.net/p/tor-censorship-2019-keep
18:01:32 we've got a big agenda today. let's get right to it and start with how we should pick snowflake stun servers
18:01:45 cohosh: can you cover this?
18:01:46 hi
18:02:01 sure
18:02:08 we discussed adding more default stun servers to snowflake
18:02:15 and i came up with a list of potential servers
18:02:28 but i'm concerned about the privacy issue here
18:02:34 and so are some users
18:03:19 we try to be careful with metrics logging to avoid correlating client ips to use of snowflake
18:03:42 this widens that attack surface (not saying google's servers are great for this either)
18:04:06 personally i think we could just implement #25591 instead, but i'm curious about other thoughts on this
18:04:59 Hmm, I understood #25591 differently.
18:05:51 I thought that a STUN server needed to receive UDP packets; for example to tell the client its own NAT mapping, which it can't find out on its own. Is that something that could happen in a domain-fronted TCP connection?
18:06:17 I thought that #25591 was just, the broker says: "you, client, use the STUN server stun.example.com"
18:06:34 ah i see
18:06:47 what i was thinking for #25591 is that we basically wouldn't use stun
18:06:51 I could be wrong about what we need from STUN as well.
18:07:32 It doesn't necessarily have to be the broker either, right? Could be an independent service?
18:07:44 yes that's true
18:08:23 we could also run (or get volunteers to run) our own stun servers
18:08:54 these would be particularly susceptible to blocking though, no?
18:09:03 yeah
18:10:13 I think the privacy concerns with choosing specific STUN servers are valid. STUN is plaintext UDP though, so essentially the same information is available to anyone on the path as well.
18:10:46 i like the idea of basically not using stun but i don't fully understand it :) cohosh, in #30579 you mention "We'll have to look into whether the domain fronting of the broker complicates this". does it make sense to spend a few hours understanding if this would work?
18:10:51 yeah i was hoping there could be a way to figure out NAT-punching information for the client over the domain-fronted connection
18:11:36 maybe it's not possible
18:12:31 i think it's a good idea to spend just enough time to understand if it's possible (and worth doing) or not
18:12:43 i think the main thing we don't want is for stun servers to have logs of client ip addresses that provide some kind of evidence that these specific clients are using snowflake
18:13:24 ok cool, i can set aside some time to look at that then and we can add it to our roadmap
18:14:07 we could continue discussion on #25591
18:14:09 agreed wrt the logging issue
18:15:02 cohosh: thanks! anything else regarding stun or should we move on?
18:15:12 i'm good for now, thanks!
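(For context, the "default stun servers" are the ICE server URLs the snowflake client hands to its WebRTC library so it can learn its NAT mapping. Below is a minimal sketch of what that configuration looks like, using the pion/webrtc API purely for illustration; it is not necessarily the library or code the snowflake client actually uses. Every STUN server listed gets queried by the client, which is the privacy concern discussed above.)

    package main

    import (
    	"log"

    	"github.com/pion/webrtc/v2"
    )

    func main() {
    	// Each server listed here receives a plaintext UDP binding request
    	// from the client, so it (and anyone on the path) sees the client IP.
    	config := webrtc.Configuration{
    		ICEServers: []webrtc.ICEServer{
    			{URLs: []string{"stun:stun.l.google.com:19302"}}, // a Google server, like the current default
    			// additional defaults would be appended here; #25591 would
    			// instead have the broker supply the NAT-mapping info so no
    			// third-party STUN server is involved at all
    		},
    	}
    	pc, err := webrtc.NewPeerConnection(config)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer pc.Close()
    	// ... signaling via the broker and the rest of connection setup ...
    }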
18:15:43 ok, next item is the iran shutdown. cohosh and i spent some time trying to understand what was happening, and looking for circumvention opportunities
18:16:24 note that it wasn't a complete shutdown. isps still allowed recursive dns requests for non-.ir domains
18:16:39 several people successfully used dns tunnels.
18:17:02 a dns-based pluggable transport would have been handy here, but tor's overhead may have made it very difficult to use.
18:17:17 It sounds like DNS over HTTPS would not have worked in this case.
18:17:55 dcf1: yes, i don't think so
18:18:14 BTW there was once a working prototype DNS pluggable transport by Irvin Zhan, https://trac.torproject.org/projects/tor/ticket/15213
18:18:23 for what it's worth, i once worked with a student who built a prototype of a dns-based PT. i contacted him and he re-uploaded his code to github. i forked it here: https://github.com/NullHypothesis/dnstun_pt
18:18:30 I get the impression it was never maintained and is somewhat abandoned.
18:18:31 he referred to his code as a "garbage fire" though :)
18:18:33 That's the one.
18:19:51 I have on my roadmap to do a transport with a turbo tunnel layer inside DNS, but I was planning to target DoH.
18:20:32 dcf1: oh, neat! either would be very useful. my hope is to pitch this to some students, and hopefully we'll find someone who can improve what we already have
18:20:33 dcf1: cool
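(For background, a dns tunnel carries data by encoding it into query names under a domain whose authoritative nameserver is the tunnel endpoint, so the traffic rides the recursive resolvers that were still allowed out. A rough, hypothetical Go sketch of the upstream direction only; tunnel.example.com is a placeholder, and a real transport would also need framing, chunking to fit the 63-byte label limit, and a proper downstream channel.)

    package main

    import (
    	"encoding/base32"
    	"fmt"
    	"net"
    )

    // sendUpstream smuggles a small chunk of data out through ordinary
    // recursive DNS resolution: the chunk becomes a label in a name under
    // a domain we control, and the resolver forwards the query to our
    // authoritative server for us.
    func sendUpstream(chunk []byte) error {
    	enc := base32.StdEncoding.WithPadding(base32.NoPadding)
    	label := enc.EncodeToString(chunk) // must stay under 63 bytes per DNS label
    	txts, err := net.LookupTXT(label + ".tunnel.example.com")
    	if err != nil {
    		return err
    	}
    	// The TXT answer can carry return data from the tunnel server.
    	fmt.Println("downstream:", txts)
    	return nil
    }

    func main() {
    	if err := sendUpstream([]byte("hello")); err != nil {
    		fmt.Println("lookup failed:", err)
    	}
    }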
18:22:03 other than that, i cannot really think of anything we could have done significantly better. it was very difficult to get a vantage point there, and by the time we got access, we already mostly knew what we wanted to find out
18:22:22 besides, the government explicitly warned people that there would be consequences if it detected circumvention efforts
18:22:31 and a dns tunnel is rather trivial to detect
18:22:52 sounds like downloading the tor consensus was a problem for scenarios in which people did actually have some access
18:23:40 i suppose we could have asked these people to set 'PathsNeededToBuildCircuits 0.25' in their torrc
18:24:43 anyway, that's it from my side. just wanted to share this information, so we're all in the loop
18:25:07 let's move on to gettor, ok?
18:25:28 hiro: you around?
18:25:34 did any of you create a ticket anywhere about the gettor-in-dip problem?
18:25:45 gaba: i did :)
18:25:50 where?
18:26:02 which problem though, the PR problem or the dip repo update problem?
18:26:09 the PR problem in dip
18:26:19 oh, is it different from the repo update problem
18:26:30 yeah i made #32569
18:26:41 thanks
18:26:44 for the PR problem ahf asked me to email him
18:27:10 err i emailed gitlab-admin@tp.o
18:27:39 ahh yes
18:27:48 but in any case, i wanted to ask hiro what the current workflow for gitlab merges is
18:28:20 I can't create a PR right now for #32480
18:28:29 but it will need a merge soon so we can update the database with new github links
18:29:07 i think hiro is the only one with permission to push to the main gettor repo but i'm not sure about that
18:29:18 ok. It seems that hiro may not be here. I can follow up with her as we need to fix the gitlab issue on PRs very soon
18:29:25 and this other issue
18:29:26 ok sounds good
18:30:04 that's it from me on this topic then
18:30:09 thanks
18:30:39 next up is the seemingly new blocking of snowflake in china
18:31:03 yep, it seems like something is going on but i don't have data on what specifically yet
18:31:15 amiableclarity has been reporting issues on trac
18:31:56 and i ran tests that probe 100 snowflakes in both canada and china; it looks like a lot more snowflakes fail from china (about 50% compared to 10%)
18:32:26 and there might also be issues once there is a connection, where the data is getting dropped at a higher rate
18:32:33 I wondered if it's because of the higher poll rate of the proxy-go instances. They could all be blocked by blocking 1 IP address.
18:32:50 fwiw, not all of the gfw's blocking is based on rst segments. relays and bridges are blocked by dropping the syn-ack segment from the relay to the client, so the gfw should be able to drop udp packets as well
18:32:51 dcf1: hmm that's an interesting thought
18:33:08 ah thanks phw
18:33:11 that's useful info
18:34:22 thanks, i wanted to give a heads up on this issue
18:34:41 dcf1: i don't follow. what do you mean by they could all be blocked by blocking 1 addr?
18:34:53 I am here
18:34:58 hiro: hi!
18:35:01 let me check that
18:35:05 phw: all our proxy-go instances are on the snowflake bridge
18:35:11 hi hiro!
18:35:20 cohosh: aah, gotcha!
18:35:23 There are 4 standalone proxy-go instances, all running on 1 IP address. We configured those standalone ones to poll more frequently than the web-based ones, so they account for an outsized share of the effective proxy capacity.
18:35:47 thanks for explaining, dcf1 and cohosh
18:35:51 dcf1: i will run some tests to determine how many unique IPs are unreachable
18:36:11 and put the results in #32657
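(To illustrate dcf1's point about poll rates: a standalone proxy-go instance asks the broker for client offers on a fixed timer, so a short interval on a few instances behind one IP translates into most of the effective capacity. The following is a simplified, hypothetical sketch of such a loop; the broker endpoint, interval, and request handling are placeholders rather than the actual proxy-go code.)

    package main

    import (
    	"log"
    	"net/http"
    	"time"
    )

    const (
    	// Placeholder endpoint; the real broker URL and request format differ.
    	brokerURL    = "https://broker.example.org/proxy"
    	pollInterval = 5 * time.Second // a shorter interval means more client offers handled
    )

    // pollOnce asks the broker whether a client is waiting for a proxy.
    func pollOnce() error {
    	resp, err := http.Post(brokerURL, "application/json", nil)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	// A real proxy would read the client's SDP offer here, answer it,
    	// and then relay traffic between the client and the snowflake bridge.
    	return nil
    }

    func main() {
    	for {
    		if err := pollOnce(); err != nil {
    			log.Println("poll failed:", err)
    		}
    		time.Sleep(pollInterval)
    	}
    }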
18:36:33 ok checked that, phw has push access to gettor and now cohosh does too
18:36:48 hiro: thanks!
18:37:32 hiro: quick question while you're here: is it possible to give cohosh and me permissions to add/modify anti-censorship infrastructure in our nagios setup?
18:38:10 what do you mean?
18:38:17 create nagios checks?
18:38:24 or alerts?
18:39:03 not sure what the difference is :) for example, you set up a check (?) to monitor gettor-01. i'd like to be able to also add a check for, say, a new default bridge when we add one. ideally without having to email anyone, to reduce friction
18:39:15 uhm
18:39:23 I am not sure that might be possible
18:39:44 ok, no worries. in this case: is it a possibility to get a small VM on which we can run our own monitoring tool?
18:39:45 our nagios setup might be too entangled with our puppet and our tpo infra
18:40:18 so if your bridge is running on tpo infra I can do that for you... and otherwise I am not sure we can
18:40:35 we have a prometheus that people can request what they need
18:40:42 default bridges are all run outside of tor infrastructure
18:41:05 and if you need some other monitoring tool you should request a VM with a ticket as you normally would
18:41:16 hiro: ok, i will, thanks!
18:41:17 requesting what you actually need
18:41:26 like what packages need to be installed and so on
18:43:08 ok, back to snowflake. since we talked about proxy-go: will scott is involved in emerald onion and asked if they can run any infrastructure for us
18:43:27 cool
18:43:34 so far, they're running exit relays but he offered to run whatever we need: default bridges, proxy-go instances, ...
18:43:43 how responsive are emerald onion?
18:43:55 i only know will and he's quite responsive
18:44:06 * Samdney arrives and is reading backlog....
18:44:13 Samdney: hi!
18:44:16 hi :)
18:44:27 the proxy-go instances need to be updated occasionally since we're still making quite a few changes to snowflake
18:44:44 we might want to run proxy-go instances on more ip addresses. in fact, i should start mine again
18:44:53 so as long as they're willing to do that, it would probably increase the usability of snowflake a bit
18:45:22 cohosh: that would entail pulling from master and re-deploying the code, right?
18:45:47 oh i know will
18:45:54 phw: yes
18:46:25 i actually wonder if there's a good way to remind all proxy-go deployers to do this when we have updates
18:47:17 cohosh: alright, i'll forward this to him
18:47:24 okay thanks phw
18:47:39 for now, we may get away with just sending them an email reminder
18:47:48 yeah
18:48:03 i'll make a ticket to think about that
18:48:19 thanks!
18:48:51 i think we're done with our discussion items. the python3/bridgedb roadmap ticket we can discuss later, during our roadmapping session
18:49:44 and i would be eternally grateful if y'all could add your november highlights to our monthly report pad: https://pad.riseup.net/p/bwskP7zCeW3TTxfg_O1C
18:52:10 let's take a look at who needs reviews
18:52:39 #32300 and #29259 for cohosh; #32499 for arlolra
18:53:01 i can review #32499
18:53:29 I'm going to look at #31157#comment:14 real quick.
18:54:00 i can review #32300 and #29259. looks like a good opportunity to familiarise myself with more parts of snowflake's codebase
18:54:08 thanks!
18:59:09 i think we're done for today but let's wait to see if dcf1 has anything to say about #31157#comment:14
18:59:22 real quick = after the meeting
19:01:15 ok, then let's wrap it up
19:01:18 #endmeeting