18:00:24 <phw> #startmeeting anti-censorship team meeting
18:00:24 <MeetBot> Meeting started Thu Dec  5 18:00:24 2019 UTC.  The chair is phw. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:24 <MeetBot> Useful Commands: #action #agreed #help #info #idea #link #topic.
18:00:28 <phw> hi everyone!
18:00:30 <cohosh> hi
18:00:43 <phw> here's our meeting pad: https://pad.riseup.net/p/tor-censorship-2019-keep
18:01:32 <phw> we've got a big agenda today. let's get right to it and start with how we should pick snowflake stun servers
18:01:45 <phw> cohosh: can you cover this?
18:01:46 <gaba> hi
18:02:01 <cohosh> sure
18:02:08 <cohosh> we discussed adding more default stun servers to snowflake
18:02:15 <cohosh> and i came up with a list of potential servers
18:02:28 <cohosh> but i'm concerned about the privacy issue here
18:02:34 <cohosh> and so are some users
18:03:19 <cohosh> we try to be careful with metrics logging to avoid correlating client ips to use of snowflake
18:03:42 <cohosh> this widens that attack surface (not saying google's servers are great for this either)
18:04:06 <cohosh> personally i think we could just implement #25591 instead but i'm curious about other thoughts on this
18:04:59 <dcf1> Hmm, I understood #25591 differently.
18:05:51 <dcf1> I thought that a STUN server needed to receive UDP packets; for example to tell the client its own NAT mapping which it can't find out on its own. Is that something that could happen in a domain-fronted TCP connection?
18:06:17 <dcf1> I thought that #25591 was just, the broker says: "you, client, use the STUN server stun.example.com"
18:06:34 <cohosh> ah i see
18:06:47 <cohosh> what i was thinking for #25591 is we basically wouldn't use stun
18:06:51 <dcf1> I could be wrong about what we need from STUN as well.
18:07:32 <dcf1> It doesn't necessarily have to be the broker either, right? Could be an independent service?
18:07:44 <cohosh> yes that's true
18:08:23 <cohosh> we could also run (or get volunteers to run) our own stun servers
18:08:54 <phw> these would be particularly susceptible to blocking though, no?
18:09:03 <cohosh> yeah
18:10:13 <dcf1> I think the privacy concerns with choosing specific STUN servers are valid. STUN is plaintext UDP though, so essentially the same information is available to anyone on the path as well.
18:10:46 <phw> i like the idea of basically not using stun but i don't fully understand it :)  cohosh, in #30579 you mention "We'll have to look into whether the domain fronting of the broker complicates this". does it make sense to spend a few hours understanding if this would work?
18:10:51 <cohosh> yeah i was hoping there could be a way to figure out NAT-punching information for the client over the domain fronted connection
18:11:36 <cohosh> maybe it's not possible
18:12:31 <phw> i think it's a good idea to spend just enough time to understand if it's possible (and worth doing) or not
18:12:43 <cohosh> i think the main thing we don't want is for stun servers to have logs of client ip addresses that provides some kind of evidence that these specific clients are using snowflake
18:13:24 <cohosh> ok cool, i can set aside some time to look at that then and we can add it to our roadmap
18:14:07 <cohosh> we could continue discussion on #25591
18:14:09 <phw> agreed wrt the logging issue
18:15:02 <phw> cohosh: thanks! anything else regarding stun or should we move on?
18:15:12 <cohosh> i'm good for now, thanks!
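(For reference on the option discussed above: a minimal sketch, not the actual snowflake client code, of how the client's ICE configuration could be built from a STUN server list supplied at runtime, e.g. handed out by the broker per #25591, instead of hard-coded defaults. The pion/webrtc v3 API and the comma-separated list format are assumptions; an empty list means the client never contacts a STUN server at all.)

    // sketch.go: illustrative only, assuming pion/webrtc v3.
    package main

    import (
        "strings"

        "github.com/pion/webrtc/v3"
    )

    // configFromSTUNList turns "stun:host1:3478,stun:host2:3478" into a
    // webrtc.Configuration. An empty string yields no ICE servers, i.e.
    // no STUN lookups happen and only host candidates are gathered.
    func configFromSTUNList(list string) webrtc.Configuration {
        var servers []webrtc.ICEServer
        for _, u := range strings.Split(list, ",") {
            if u = strings.TrimSpace(u); u != "" {
                servers = append(servers, webrtc.ICEServer{URLs: []string{u}})
            }
        }
        return webrtc.Configuration{ICEServers: servers}
    }

    func main() {
        // The list could come from a command-line flag or from the broker's
        // (domain-fronted) response rather than a built-in default.
        pc, err := webrtc.NewPeerConnection(configFromSTUNList("stun:stun.example.com:3478"))
        if err != nil {
            panic(err)
        }
        defer pc.Close()
    }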
18:15:43 <phw> ok, next item is the iran shutdown. cohosh and i spent some time trying to understand what was happening, and looking for circumvention opportunities
18:16:24 <phw> note that it wasn't a complete shutdown. isps still allowed recursive dns requests for non-.ir domains
18:16:39 <phw> several people successfully used dns tunnels.
18:17:02 <phw> a dns-based pluggable transport would have been handy here, but tor's overhead may have made it very difficult to use.
18:17:17 <dcf1> It sounds like DNS over HTTPS would not have worked in this case.
18:17:55 <phw> dcf1: yes, i don't think so
18:18:14 <dcf1> BTW there was once a working prototype DNS pluggable transport by Irvin Zhan, https://trac.torproject.org/projects/tor/ticket/15213
18:18:23 <phw> for what it's worth, i once worked with a student who built a prototype of a dns-based PT. i contacted him and he re-uploaded his code to github. i forked it here: https://github.com/NullHypothesis/dnstun_pt
18:18:30 <dcf1> I get the impression it was never maintained and is somewhat abandoned.
18:18:31 <phw> he referred to his code as a "garbage fire" though :)
18:18:33 <dcf1> That's the one.
18:19:51 <dcf1> I have on my roadmap to do a transport with a turbo tunnel layer inside DNS, but I was planning to target DoH.
18:20:32 <phw> dcf1: oh, neat! either would be very useful. my hope is to pitch this to some students, and hopefully we'll find someone who can improve what we already have
18:20:33 <cohosh> dcf1: cool
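(As background on the DoH idea: RFC 8484 carries an ordinary binary DNS message in an HTTPS request body, so a tunnel's upstream data would ride inside queries like the sketch below. The resolver URL and the use of golang.org/x/net/dns/dnsmessage are illustrative assumptions, not the planned transport.)

    // doh_sketch.go: illustrative RFC 8484 POST, not the planned PT.
    package main

    import (
        "bytes"
        "fmt"
        "io/ioutil"
        "net/http"

        "golang.org/x/net/dns/dnsmessage"
    )

    func main() {
        // Build a plain DNS query for a TXT record.
        msg := dnsmessage.Message{
            Header: dnsmessage.Header{RecursionDesired: true},
            Questions: []dnsmessage.Question{{
                Name:  dnsmessage.MustNewName("example.com."),
                Type:  dnsmessage.TypeTXT,
                Class: dnsmessage.ClassINET,
            }},
        }
        packed, err := msg.Pack()
        if err != nil {
            panic(err)
        }
        // Send it over HTTPS to a DoH resolver (placeholder URL).
        resp, err := http.Post("https://doh.example/dns-query",
            "application/dns-message", bytes.NewReader(packed))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Printf("got %d bytes of DNS response\n", len(body))
    }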
18:22:03 <phw> other than that, i cannot really think of anything we could have done significantly better. it was very difficult to get a vantage point there and by the time we got access, we already mostly knew what we wanted to find out
18:22:22 <phw> besides, the government explicitly warned people that there would be consequences if it detected circumvention efforts
18:22:31 <phw> and a dns tunnel is rather trivial to detect
18:22:52 <cohosh> sounds like downloading the tor consensus was a problem for scenarios in which people did actually have some access
18:23:40 <phw> i suppose we could have asked these people to set 'PathsNeededToBuildCircuits 0.25' in their torrc
18:24:43 <phw> anyway, that's it from my side. just wanted to share this information, so we're all in the loop
18:25:07 <phw> let's move on to gettor, ok?
18:25:28 <cohosh> hiro: you around?
18:25:34 <gaba> did any of you create a ticket anywhere about the gettor problem in dip?
18:25:45 <cohosh> gaba: i did :)
18:25:50 <gaba> where?
18:26:02 <cohosh> which problem though, the PR problem or the dip repo update problem?
18:26:09 <gaba> the PR problem in dip
18:26:19 <gaba> oh, is it different from the repo update problem
18:26:30 <cohosh> yeah i made #32569
18:26:41 <gaba> thanks
18:26:44 <cohosh> for the PR problem ahf asked me to email him
18:27:10 <cohosh> err i emailed gitlab-admin@tp.o
18:27:39 <gaba> ahh yes
18:27:48 <cohosh> but in any case, i wanted to ask hiro what the current workflow for git lab merges is
18:28:20 <cohosh> I can't create a PR right now for #32480
18:28:29 <cohosh> but it will need a merge soon so we can update the database with new github links
18:29:07 <cohosh> i think hiro is the only one with permission to push to the main gettor repo but i'm not sure about that
18:29:18 <gaba> ok. It seems that hiro may not be here. I can follow up with her as we need to fix the gitlab issue on PRs very soon
18:29:25 <gaba> and this other issue
18:29:26 <cohosh> ok sounds good
18:30:04 <cohosh> that's it from me on this topic then
18:30:09 <gaba> thanks
18:30:39 <phw> next up is the seemingly new blocking of snowflake in china
18:31:03 <cohosh> yep, it seems like something is going on but i don't have data on what specifically yet
18:31:15 <cohosh> amiableclarity has been reporting issues on trac
18:31:56 <cohosh> and i ran tests probing 100 snowflakes from both canada and china; it looks like a lot more snowflakes fail from china (about 50% compared to 10%)
18:32:26 <cohosh> and there might also be issues once there is a connection, where the data is getting dropped at a higher rate
18:32:33 <dcf1> I wondered if it's because of the higher poll rate of the proxy-go instances. They could all be blocked by blocking 1 IP address.
18:32:50 <phw> fwiw, not all of the gfw's blocking is based on rst segments. relays and bridges are blocked by dropping the syn-ack segment from the relay to the client, so the gfw should be able to drop udp packets as well
18:32:51 <cohosh> dcf1: hmm that's an interesting thought
18:33:08 <cohosh> ah thanks phw
18:33:11 <cohosh> that's useful info
18:34:22 <cohosh> thanks, i wanted to give a heads up on this issue
18:34:41 <phw> dcf1: i don't follow. what do you mean by they could all be blocked by blocking 1 addr?
18:34:53 <hiro> I am here
18:34:58 <phw> hiro: hi!
18:35:01 <hiro> let me check that
18:35:05 <cohosh> phw: all our proxy-go instances are on the snowflake bridge
18:35:11 <cohosh> hi hiro!
18:35:20 <phw> cohosh: aah, gotcha!
18:35:23 <dcf1> There are 4 standalone proxy-go instances, all running on 1 IP address. We configured those standalone ones to poll more frequently than the web-based ones, so they account for an outsized share of the effective proxy capacity.
18:35:47 <phw> thanks for explaining, dcf1 and cohosh
18:35:51 <cohosh> dcf1: i will run some tests to determine how many unique IPs are unreachable
18:36:11 <cohosh> and put the results in #32657
18:36:33 <hiro> ok, checked that: phw had push access to gettor and now cohosh does too
18:36:48 <cohosh> hiro: thanks!
18:37:32 <phw> hiro: quick question while you're here: is it possible to give cohosh and me permissions to add/modify anti-censorship infrastructure in our nagios setup?
18:38:10 <hiro> what do you mean?
18:38:17 <hiro> create nagios checks?
18:38:24 <hiro> or alerts?
18:39:03 <phw> not sure what the difference is :)  for example, you set up a check (?) to monitor gettor-01. i'd like to be able to also add a check for, say, a new default bridge when we add one. ideally without having to email anyone, to reduce friction
18:39:15 <hiro> uhm
18:39:23 <hiro> I am not sure that might be possible
18:39:44 <phw> ok, no worries. in this case: is it a possibility to get a small VM on which we can run our own monitoring tool?
18:39:45 <hiro> our nagios setup might be too entangled with our puppet setup and our tpo infra
18:40:18 <hiro> so if your bridge is running on tpo infra I can do that for you... and otherwise I am not sure we can
18:40:35 <hiro> we have a prometheus instance where people can request what they need
18:40:42 <phw> default bridges are all run outside of tor infrastructure
18:41:05 <hiro> and if you need some other monitoring tool you should request a VM with a ticket as you normally would
18:41:16 <phw> hiro: ok, i will, thanks!
18:41:17 <hiro> requesting what you actually need
18:41:26 <hiro> like what packages need to be installed and so on
18:43:08 <phw> ok, back to snowflake. since we talked about proxy-go: will scott is involved in emerald onion and asked if they can run any infrastructure for us
18:43:27 <cohosh> cool
18:43:34 <phw> so far, they're running exit relays but he offered to run whatever we need: default bridges, proxy-go instances, ...
18:43:43 <cohosh> how responsive are emerald onion?
18:43:55 <phw> i only know will and he's quite responsive
18:44:06 * Samdney arrives and is reading backlog....
18:44:13 <phw> Samdney: hi!
18:44:16 <Samdney> hi :)
18:44:27 <cohosh> the proxy-go instances need to be updated occasionally since we're still making quite a few changes to snowflake
18:44:44 <phw> we might want to run proxy-go instances on more ip addresses. in fact, i should start mine again
18:44:53 <cohosh> so as long as they're willing to do that, it would probably increase the usability of snowflake a bit
18:45:22 <phw> cohosh: that would entail pulling from master and re-deploying the code, right?
18:45:47 <cohosh> oh i know will
18:45:54 <cohosh> phw: yes
18:46:25 <cohosh> i actually wonder if there's a good way to remind all proxy-go deployers to do this when we have updates
18:47:17 <phw> cohosh: alright, i'll forward this to him
18:47:24 <cohosh> okay thanks phw
18:47:39 <phw> for now, we may get away with just sending them an email reminder
18:47:48 <cohosh> yeah
18:48:03 <cohosh> i'll make a ticket to think about that
18:48:19 <phw> thanks!
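(For the record, updating a standalone proxy-go instance is roughly the following; the paths and the service name are assumptions and depend on how each deployer set things up:)

    # Illustrative update procedure for a standalone proxy-go deployment.
    cd ~/snowflake            # local clone of the snowflake repository
    git pull origin master    # pick up the latest changes
    cd proxy-go && go build   # rebuild the standalone proxy binary
    # then restart however the proxy is supervised, e.g.:
    # systemctl restart snowflake-proxy-go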
18:48:51 <phw> i think we're done with our discussion items. the python3/bridgedb roadmap ticket we can discuss later, during our roadmapping session
18:49:44 <phw> and i would be eternally grateful if y'all could add your november highlights to our monthly report pad: https://pad.riseup.net/p/bwskP7zCeW3TTxfg_O1C
18:52:10 <phw> let's take a look at who needs reviews
18:52:39 <phw> #32300 and #29259 for cohosh; #32499 for arlolra
18:53:01 <cohosh> i can review #32499
18:53:29 <dcf1> I'm going to look at #31157#comment:14 real quick.
18:54:00 <phw> i can review #32300 and #29259. looks like a good opportunity to familiarise myself with more parts of snowflake's codebase
18:54:08 <cohosh> thanks!
18:59:09 <phw> i think we're done for today but let's wait if dcf1 has anything to say about #31157#comment:14
18:59:22 <dcf1> real quick = after the meeting
19:01:15 <phw> ok, then let's wrap it up
19:01:18 <phw> #endmeeting