13:59:57 <hellais> #startmeeting OONI Community Gathering 2019-04-30
13:59:57 <MeetBot> Meeting started Tue Apr 30 13:59:57 2019 UTC.  The chair is hellais. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:59:57 <MeetBot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:00:26 <slacktopus1> <agrabeli> Hello friends! Welcome to the April OONI Community Meeting :slightly_smiling_face:
14:00:51 <slacktopus1> <babatunde.okunoye> Hello Maria
14:00:52 <slacktopus1> <agrabeli> As a reminder: We're discussing topics in the agenda, which you can contribute to: https://pad.riseup.net/p/ooni-community-meeting
14:00:55 <slacktopus1> Action: sbs waves
14:01:31 <slacktopus1> <agrabeli> Please feel encouraged to introduce yourselves asynchronously as you join :slightly_smiling_face:
14:02:14 <slacktopus1> <agrabeli> So far, we have 2 topics in the agenda. If there's something else you'd like to discuss, please add it: https://pad.riseup.net/p/ooni-community-meeting
14:02:33 <slacktopus1> <agrabeli> So starting from topic #1
14:02:52 <slacktopus1> <agrabeli> #1. NDT problem in Iran + NDT7
14:03:11 <slacktopus1> <agrabeli> @xhdix would you like to share some words?
14:06:17 <slacktopus1> <agrabeli> What is the problem with NDT in Iran? Does it not work locally..?
14:06:54 <slacktopus1> <xhdix> I've talked about it before. HTTPS throttling in Iran becomes more intense every day, but because Google blocks users in Iran, there is no way to test performance there. This reduces the transparency of censorship in Iran.
14:08:08 <slacktopus1> <xhdix> More details needed?
14:08:35 <slacktopus1> <hellais> I guess some things to point out in relation to this are the following:
14:09:15 <slacktopus1> <hellais> • My understanding of the main limitation to NDT testing in Iran is that mlab-ns, since it's run on Google App Engine, is doing server-side censorship of clients from Iran
14:09:44 <slacktopus1> <hellais> Our plan for working around this limitation is to run a proxy in front of mlab-ns that forwards requests coming from users in censored locations, possibly using this method as a failover
14:10:27 <slacktopus1> <hellais> This issue affects not only users in Iran, but at least users in Cuba as well, and possibly more
14:10:29 <slacktopus1> <sbs> The ndt7 implementation will take a vector of mlab-ns-like services and fall back to the proxy on error from the main mlab-ns URL
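A minimal sketch of the fallback idea described above, in Go: query a list of mlab-ns-like locators in order and return the first server that answers. The proxy URL, the endpoint path, and the response fields are assumptions for illustration, not the actual OONI or M-Lab deployment.

```go
// Query a list of mlab-ns-like locator services in order and return the
// first NDT server that answers. URLs and JSON fields are illustrative.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// server holds the subset of the locator response we care about.
type server struct {
	FQDN string `json:"fqdn"`
}

// locators lists the primary mlab-ns endpoint first, then a hypothetical
// OONI-run proxy used as a failover for clients in censored locations.
var locators = []string{
	"https://mlab-ns.appspot.com/ndt_ssl",
	"https://mlabns-proxy.example.org/ndt_ssl", // hypothetical proxy
}

func locate(client *http.Client) (*server, error) {
	var lastErr error
	for _, url := range locators {
		resp, err := client.Get(url)
		if err != nil {
			lastErr = err
			continue // network error (or blocked): try the next locator
		}
		var srv server
		if resp.StatusCode == http.StatusOK {
			err = json.NewDecoder(resp.Body).Decode(&srv)
		} else {
			err = fmt.Errorf("%s: unexpected status %d", url, resp.StatusCode)
		}
		resp.Body.Close()
		if err != nil {
			lastErr = err
			continue
		}
		return &srv, nil
	}
	return nil, lastErr
}

func main() {
	srv, err := locate(&http.Client{Timeout: 10 * time.Second})
	if err != nil {
		fmt.Println("all locators failed:", err)
		return
	}
	fmt.Println("nearest NDT server:", srv.FQDN)
}
```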
14:11:14 <slacktopus1> <hellais> • WRT measuring HTTPS throttling, I am not 100% sure that “just” having an NDT test is going to be enough; we may need at least a baseline to compare it to
14:11:17 <slacktopus1> <xhdix> (I think: Crimea, Cuba, Iran, North Korea, Syria)
14:11:54 <slacktopus1> <hellais> Perhaps running NDT7 and legacy NDT side by side could be one way to detect throttling that targets HTTPS traffic in particular
14:12:35 <slacktopus1> <hellais> Though I should highlight that, at least according to my limited understanding of the tests in question, the goal of NDT is not to detect protocol-based throttling
14:13:02 <slacktopus1> <hellais> This is something we are in general interested in measuring and perhaps NDT7 is a stepping stone in that direction, but it’s not necessarily all that is needed
14:13:17 <slacktopus1> <hellais> @sbs is more knowledgeable about this subject and perhaps has something to add on this
14:13:53 <slacktopus1> <sbs> @xhdix this is a direct link to the nearest server for running ndt7
14:13:56 <slacktopus1> <sbs> https://bassosimone.github.io/ndt7/#mlab4-bom02
14:14:16 <slacktopus1> <sbs> that's the JavaScript implementation; we're working on other implementations
14:14:48 <slacktopus1> <sbs> with respect to protocol throttling:
14:14:56 <slacktopus1> <sbs> - ndt7 uses https
14:15:03 <slacktopus1> <sbs> - is a possible baseline
14:15:43 <slacktopus1> <sbs> the need to detect protocol throttling seems to justify spending more time on that topic
14:16:03 <slacktopus1> <sbs> as @hellais mentioned, there needs to be a comparison between two protocols
14:16:20 <slacktopus1> <sbs> I believe we can take advantage of TCP BBR to have good kernel level measurements
14:16:48 <slacktopus1> <sbs> (but I also think ndt7 in itself would not be enough and we’ll need to have a version of it tailored for OONI needs)
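As a rough sketch of the two-protocol comparison mentioned above (not the planned ndt7 design, and orthogonal to the BBR point): fetch the same large resource over plain HTTP and over HTTPS from the same host and compare goodput. The URLs are hypothetical placeholders, and a real experiment would need repeated runs plus a baseline from an unthrottled vantage point before claiming protocol-based throttling.

```go
// Naive protocol-throttling hint: fetch the same resource over two
// schemes and compare the resulting goodput. URLs are placeholders.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// goodput downloads url, discards the body, and returns the rate in Mbit/s.
func goodput(url string) (float64, error) {
	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	n, err := io.Copy(io.Discard, resp.Body)
	if err != nil {
		return 0, err
	}
	return float64(n) * 8 / time.Since(start).Seconds() / 1e6, nil
}

func main() {
	plain, err1 := goodput("http://speedtest.example.org/100MB.bin")
	secure, err2 := goodput("https://speedtest.example.org/100MB.bin")
	if err1 != nil || err2 != nil {
		fmt.Println("measurement failed:", err1, err2)
		return
	}
	fmt.Printf("http: %.1f Mbit/s, https: %.1f Mbit/s\n", plain, secure)
	// A persistent gap is only a hint: it could also be explained by
	// caching, different server paths, or transient congestion.
	if secure > 0 && plain/secure > 2 {
		fmt.Println("HTTPS much slower than HTTP on this path: worth investigating")
	}
}
```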
14:16:57 <slacktopus1> <sbs> another aspect to keep in mind is the location of servers
14:17:17 <slacktopus1> <sbs> the server referenced in the above URL is in India (Bom = Bombay?)
14:17:46 <slacktopus1> <sbs> a project to characterise throttling _may_ need servers to test with, possibly some of them inside Iran
14:18:25 <slacktopus1> <sbs> and here’s another aspect to keep in mind
14:18:46 <slacktopus1> <sbs> we may want to extend our HTTP client to measure the speed of a download from any server
14:18:56 <slacktopus1> <sbs> and take advantage of large resources
14:19:03 <slacktopus1> <sbs> this is not as good as controlling the server, sadly
14:19:11 <slacktopus1> <sbs> but still it’s an interesting data point
14:19:25 <slacktopus1> <sbs> that’s probably my `${braindump}` on the topic; EOF
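A sketch of the last point (extending the HTTP client to measure download speed from an arbitrary server): wrap the response body in a counting reader and print a speed sample every second while fetching a large resource. The URL is a placeholder, and this only captures application-level goodput, not what the kernel or the network sees.

```go
// Periodically sample the application-level download rate while fetching
// a large resource from an arbitrary server. The URL is a placeholder.
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync/atomic"
	"time"
)

// countingReader counts how many bytes have been read through it.
type countingReader struct {
	r io.Reader
	n int64
}

func (c *countingReader) Read(p []byte) (int, error) {
	n, err := c.r.Read(p)
	atomic.AddInt64(&c.n, int64(n))
	return n, err
}

func main() {
	resp, err := http.Get("https://large-resource.example.org/1GB.bin")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	cr := &countingReader{r: resp.Body}
	done := make(chan struct{})
	go func() {
		io.Copy(io.Discard, cr) // drive the download, discarding the payload
		close(done)
	}()

	start := time.Now()
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	var prev int64
	for {
		select {
		case <-ticker.C:
			cur := atomic.LoadInt64(&cr.n)
			fmt.Printf("t=%3.0fs rate=%6.1f Mbit/s\n",
				time.Since(start).Seconds(), float64(cur-prev)*8/1e6)
			prev = cur
		case <-done:
			return
		}
	}
}
```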
14:19:53 <slacktopus1> <hellais> :+1:
14:22:14 <slacktopus1> <agrabeli> Great, is there anything else to discuss on this topic?
14:22:53 <slacktopus1> <sbs> not from me :slightly_smiling_face:
14:23:09 <slacktopus1> <sbs> I’ll be interested to follow up with @xhdix when ndt7 is more mature
14:23:29 <slacktopus1> <agrabeli> Moving onto the next topic
14:23:42 <slacktopus1> <agrabeli> #2. Starting to use a Go engine
14:23:49 <slacktopus1> <sbs> yeah
14:23:56 <slacktopus1> <sbs> since I added the entry maybe I can comment on it
14:24:18 <slacktopus1> <sbs> - the speed at which MK is progressing is not satisfactory to me
14:24:32 <slacktopus1> <sbs> - some tests we’re going to import are written in Go (e.g. Psiphon)
14:24:43 <slacktopus1> <sbs> - writing and maintaining Go is easier than C++
14:24:58 <slacktopus1> <sbs> - recompiling for mobile is _much_ quicker (seconds vs. hours)
14:25:15 <slacktopus1> <sbs> - binding an easier-to-use API for mobile devices entails much less toil (a sketch follows below)
14:25:31 <slacktopus1> <sbs> - at the same time, Go binaries are larger
14:25:42 <slacktopus1> <sbs> then:
14:25:49 <slacktopus1> <sbs> - adding Psiphon is something we want to do
14:26:02 <slacktopus1> <sbs> - that means we already pay the cost of the Go runtime
14:26:34 <slacktopus1> <sbs> - adding more tests written in Go means it’s “cheap” because the Go runtime + Psiphon is already big anyway
14:26:49 <slacktopus1> <sbs> - (e.g. ndt7 in Go is much simpler and more robust than in C++)
14:26:50 <slacktopus1> <sbs> yet:
14:27:09 <slacktopus1> <sbs> - we cannot just replace MK (it would be an engineering nightmare)
14:27:30 <slacktopus1> <sbs> - we probably will have two “libs” side by side (one in C++ and one in Go)
14:27:43 <slacktopus1> <sbs> - we'll try to have more and more stuff in Go
14:27:44 <slacktopus1> <sbs> still:
14:28:12 <slacktopus1> <sbs> - this means larger apps (to quantify: say up to +50 MiB, as an upper bound)
14:28:34 <slacktopus1> <sbs> - we can perhaps take advantage of slicing (i.e. package one app for arm, one for arm64, etc.)
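To illustrate the point above about binding an easier-to-use API for mobile (a sketch only, not the actual OONI engine interface): a Go package that sticks to simple types can be turned into an Android or iOS library with gomobile, e.g. `gomobile bind -target=android ./engine`. All names below are hypothetical.

```go
// Package engine sketches a mobile-friendly measurement API: only simple
// types (strings, bools, errors, struct pointers) are exposed, so that
// `gomobile bind` can generate Java/Kotlin and Objective-C/Swift bindings.
// All names are hypothetical, not the real OONI engine API.
package engine

import (
	"encoding/json"
	"errors"
)

// Task is a handle to a single measurement run.
type Task struct {
	name string
	done bool
}

// StartTask starts the named experiment and returns a handle to it.
func StartTask(name string) (*Task, error) {
	if name == "" {
		return nil, errors.New("engine: empty task name")
	}
	// A real implementation would schedule the experiment here.
	return &Task{name: name}, nil
}

// NextEvent returns the next event emitted by the task as a JSON string,
// or "" when there are no more events. Serialising events as JSON keeps
// the bound API down to plain strings.
func (t *Task) NextEvent() string {
	if t.done {
		return ""
	}
	t.done = true
	ev := map[string]string{"key": "status.started", "task": t.name}
	data, _ := json.Marshal(ev)
	return string(data)
}

// IsDone reports whether the task has emitted all of its events.
func (t *Task) IsDone() bool {
	return t.done
}
```

The bound API would then be callable from Java/Kotlin or Objective-C/Swift; the exact generated class and method names depend on gomobile's generator.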
14:28:57 <slacktopus1> <sbs> questions:
14:29:01 <slacktopus1> <darkk> @sbs have you taken a look at Vinicius's post on the traffic-obf ML about their attempts to strip down the Go runtime to fit within iOS VPN limitations?
14:29:22 <slacktopus1> <sbs> @darkk I did not, that’s a good tip
14:29:37 <slacktopus1> <sbs> I’d like to understand from users what type of issue a larger app would cause
14:29:45 <slacktopus1> <darkk> Yep, it may be interesting. Sorry for interrupting :slightly_smiling_face:
14:30:02 <slacktopus1> <sbs> I’m particularly interested in knowing the issues and in applying easier mitigations (if needed) first
14:30:17 <slacktopus1> <sbs> I’d like to avoid replacing the C++ toil with Go packaging and shrinking toil
14:30:33 <slacktopus1> <sbs> (as I’d like to have more time to invest in data quality, research, and new experiments)
14:30:36 <slacktopus1> <sbs> so:
14:30:55 <slacktopus1> <sbs> - how big a problem would it be if the app were +50 MiB bigger than now?
14:31:00 <slacktopus1> <sbs> EOF
14:31:13 <slacktopus1> <sbs> Thanks for mentioning it :slightly_smiling_face:
14:32:04 <slacktopus1> <hellais> FTR this is the thread in question: https://groups.google.com/forum/#!topic/traffic-obf/PksmyfHMUb4
14:32:52 <slacktopus1> <sbs> The bottom of the thread seems to deal with memory usage
14:33:01 <slacktopus1> <sbs> (checking whether I wrote app size)
14:33:57 <slacktopus1> <hellais> It's also probably worth speaking to folks like the Psiphon people, who have an Android app that is 8.78 MB
14:34:42 <slacktopus1> <darkk> Yes, but those correlate sometimes, so, maybe that's still interesting :slightly_smiling_face:
14:34:58 <slacktopus1> <hellais> I think having an app which is > 50 MB is quite suboptimal
14:35:11 <slacktopus1> <hellais> Our app is already pretty large as it is, at 27.5 MB on iOS
14:35:18 <slacktopus1> <agrabeli> @sbs I guess having a bigger app may mean that people are more likely to uninstall it in favor of making space for more apps
14:35:44 <slacktopus1> <hellais> Then again you have apps like Facebook which are 256MB, so I don’t know
14:36:10 <slacktopus1> <hellais> I think this is the sort of question we should probably do some user research on to try to better understand how and to what extent it would affect our userbase
14:36:50 <slacktopus1> <hellais> Probably after we have better understood how much bigger it will in fact turn out to be, and whether there is something we can do to further limit the bundle size
14:36:57 <slacktopus1> <sbs> > Then again you have apps like Facebook which are 256MB, so I don't know
Well, I guess that app transfers a lot of perceived value to its users. And in general that is of course part of the equation.
14:37:29 <slacktopus1> <hellais> FTR Psiphon on iOS is actually 28.3 MB
14:37:35 <slacktopus1> <sbs> > […] we should probably do some user research
Indeed, that's why I am asking this question in this context :slightly_smiling_face:
14:37:55 <slacktopus1> <sbs> yeah
14:39:07 <slacktopus1> <agrabeli> @sbs yeah, that's a good point. I guess dedicated information controls researchers are more likely to keep OONI Probe installed, no matter how big the app is, whereas other, less committed users would probably uninstall it if it takes up too much space. The problem is that we need both types of users in order to have a larger and more diverse volume of measurements over time.
14:41:22 <slacktopus1> <babatunde.okunoye> Yes, from my own experience in conducting tests around the Nigerian elections, which increased the data footprint of the OONI app (app size + test results data), my phone always suggested I delete the OONI app to create more space
14:41:23 <slacktopus1> <hellais> I think we would get a more measurable and accurate answer to this question if we designed a user survey that we distribute, for example, via push notifications to our user base.
14:42:30 <slacktopus1> <hellais> It can be a simple survey that asks 2-3 simple questions and that we circulate to our entire active user base. We saw while doing the usability study for the 2.0.0 app that if we do that we are going to get a **lot** of input from the community
14:42:39 <slacktopus1> <hellais> Even more so if it's just asking a very simple question like this one
14:43:15 <slacktopus1> <hellais> This is a very good point. We are in fact also working on a feature that will automatically delete measurements from the device once they are uploaded, which will hopefully at least reduce the footprint of the test result data.
14:43:41 <slacktopus1> <sbs> @hellais it’s simple but not straightforward, because we need to weigh on the one side the increased app size and on the other the increased capabilities we will have. Yet, since these are just theoretical, it may be complex to ask the right question.
14:44:01 <slacktopus1> <hellais> This is the github issue in question: https://github.com/ooni/probe/issues/810
14:45:10 <slacktopus1> <hellais> I think getting answers to these questions, though, can inform how we prioritize dev work. For example, if app size is not too important to users, maybe we don't have to spend too much time optimising the app bundle size; if it is, maybe we should spend more time on that
14:45:38 <slacktopus1> <sbs> Yes, that’s why we need to ask the right questions
14:45:43 <slacktopus1> <sbs> (it seems we are in agreement here)
14:45:45 <slacktopus1> <hellais> Given that there are apps which link against Go that are less than 50MB (e.g. Psiphon), it seems reasonable to assume that it is possible to obtain something in that size range, no?
14:46:02 <slacktopus1> <sbs> Yeah, as I mentioned there are options like:
14:46:10 <slacktopus1> <sbs> 1. dropping some architectures (x86 and x86_64)
14:46:30 <slacktopus1> <sbs> 2. one APK per architecture
14:46:43 <slacktopus1> <sbs> at least on Android (I don’t know about iOS)
14:47:02 <slacktopus1> <sbs> To summarize:
14:47:11 <slacktopus1> <sbs> 1. a 50 MiB app is not so good
14:47:39 <slacktopus1> <sbs> 2. Psiphon is smaller than that, so it’s possible (maybe they drop archs or have one APK per arch)
14:47:49 <slacktopus1> <sbs> 3. we should ask users how important this is
14:47:57 <slacktopus1> <sbs> 4. this will inform our prios
14:48:12 <slacktopus1> <sbs> 5. just make sure we explain how Go would make us more productive, and thus the apps more valuable
14:48:19 <slacktopus1> <sbs> anything else?
14:48:45 <slacktopus1> <hellais> Yeah I think that sums it up
14:49:00 <slacktopus1> <hellais> Should we move to the next item?
14:49:17 <slacktopus1> <agrabeli> We only have 10 minutes left, so I think so
14:49:39 <slacktopus1> <agrabeli> (In the meanwhile, if anyone else has thoughts on app size, please reach out to us)
14:50:04 <slacktopus1> <agrabeli> #3. Sourcing more in-country collaborators for OONI testing
14:52:07 <slacktopus1> <babatunde.okunoye> Yes, I asked that question. I noticed there have been a number of social media blackouts in Africa and elsewhere this year, without OONI publishing measurement results from there. Why is this the case? OONI Explorer data suggests OONI tests are/were once conducted in some of these countries
14:52:33 <slacktopus1> <babatunde.okunoye> Do we need to reach out to new active testers?
14:52:54 <slacktopus1> <babatunde.okunoye> For instance, there was the Benin disruption this week
14:53:19 <slacktopus1> <agrabeli> We'll be publishing a report on Benin soon
14:53:34 <slacktopus1> <babatunde.okunoye> Venezuela and other places too
14:53:53 <slacktopus1> <babatunde.okunoye> Oh
14:54:11 <slacktopus1> <agrabeli> In general, the availability of measurements depends on whether OONI Probe users run tests on the ground
14:54:39 <slacktopus1> <hellais> @andresazp has been doing great work on reporting incidents in Venezuela and coordinating OONI Probe testing.
14:55:05 <slacktopus1> <agrabeli> A number of censorship events (in Venezuela and around the world) have been measured by OONI Probe users and published on OONI Explorer (even though we may have not published reports on them)
14:55:12 <slacktopus1> <andresazp> I can say Venezuela has changed how they implement SNI filtering
14:55:26 <slacktopus1> <babatunde.okunoye> Oh, ok
14:55:28 <slacktopus1> <andresazp> and is currently targeting many social media platforms
14:56:33 <slacktopus1> <hellais> Yeah, the tricky thing is that I bet there is more censorship happening out there than the OONI team has the capacity and resources to write research reports on, which is why all our data and tools are out there so anybody can take the data and do their own analysis and investigation
14:57:05 <slacktopus1> <hellais> In practice this means that the information on what is happening is not centralised by us and sometimes not everybody is aware of everything that is going on
14:57:06 <slacktopus1> <babatunde.okunoye> Okay, that makes sense
14:57:24 <slacktopus1> <agrabeli> We generally support decentralized efforts in monitoring internet censorship, which means that everyone is encouraged to engage others with the use of OONI Probe and OONI data in order to investigate internet censorship in their countries.
14:57:42 <slacktopus1> <hellais> That said, more measurements and more testing are always good and needed.
14:58:11 <slacktopus1> <hellais> It's also important to highlight, though, that doing this research and work in a way that is technically and methodologically sound does take some time
14:58:30 <slacktopus1> <agrabeli> @babatunde.okunoye to answer your question: Yes, the more people are engaged with testing, the better. :slightly_smiling_face: This is particularly the case because more coverage across more networks would allow for more comprehensive testing over time.
14:58:42 <slacktopus1> <hellais> So be skeptical when you see people putting out “technical” research very quickly with dodgy documentation on how the “research” was carried out
14:58:47 <slacktopus1> <xhdix> Until a few months ago, I didn't know what exactly OONI is. (And the same goes for many of my friends)
14:59:13 <slacktopus1> <hellais> Any reference to real facts or persons is purely coincidental
14:59:58 <slacktopus1> <agrabeli> So yeah, it would be great if you could all engage your friends and networks with the use of OONI Probe: https://ooni.io/install/ :slightly_smiling_face:
15:00:26 <slacktopus1> <agrabeli> This tool allows you and your communities to investigate internet censorship and collect network data, showing what's happening, in real-time.
15:00:40 <slacktopus1> <agrabeli> And all results are automatically published in an open way on 2 platforms:
15:00:53 <slacktopus1> <agrabeli> * OONI Explorer: https://explorer.ooni.io/
15:01:04 <slacktopus1> <agrabeli> * OONI API: https://api.ooni.io/
15:01:24 <slacktopus1> <agrabeli> You can download OONI data from the above, and use it as part of your own research and advocacy.
15:01:51 <slacktopus1> <agrabeli> OONI data does contain evidence of the recently reported censorship events in Benin, Sri Lanka, and many other places.
15:02:12 <slacktopus1> <agrabeli> We encourage you to make use of the data, and to explore it to uncover any unreported cases!
15:03:16 <slacktopus1> <agrabeli> The aim with OONI Probe is not just to confirm censorship cases (that people notice and report on anyway), but to also potentially uncover internet censorship. It's an investigatory tool.
15:04:07 <slacktopus1> <agrabeli> How can we best support community engagement efforts?
15:04:30 <slacktopus1> <agrabeli> Are there ways that the OONI team can support you in engaging your local communities?
15:06:46 <slacktopus1> <babatunde.okunoye> Yes, you're doing some of that already: improving the quality of your information
15:07:21 <slacktopus1> <agrabeli> Measurements of censorship events in Africa and elsewhere are automatically published every day
15:07:34 <slacktopus1> <agrabeli> We acknowledge though that OONI Explorer is pretty hard to use, currently
15:07:51 <slacktopus1> <agrabeli> We're working on revamping OONI Explorer and will launch the beta soon - stay tuned :slightly_smiling_face:
15:08:29 <slacktopus1> <pellaeon> awesome, thanks!!
15:08:32 <slacktopus1> <agrabeli> We're happy to discuss this further, and if you folks have feedback / suggestions for us, please share them anytime!
15:09:36 <slacktopus1> <agrabeli> We're 8 minutes past the end of the meeting, so unless there's something urgent, we'll end now
15:10:05 <slacktopus1> <agrabeli> Thanks folks for joining today! :slightly_smiling_face:
15:10:08 <hellais> #endmeeting