17:00:26 <hellais> #startmeeting OONI weekly gathering 2016-08-29
17:00:26 <MeetBot> Meeting started Mon Aug 29 17:00:26 2016 UTC.  The chair is hellais. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:26 <MeetBot> Useful Commands: #action #agreed #help #info #idea #link #topic.
17:00:37 <hellais> so, let's get this started
17:01:04 <darkk> hellais: in Soviet Russia you'll rather have github blocked (happened several times) than TorProject (never happened to date) :-)
17:01:29 <hellais> #topic Discuss the update strategy from 1.x -> 2.x
17:01:43 <hellais> darkk: but eventually it got unblocked?
17:02:02 <darkk> hellais: true, just kidding
17:02:11 <sbs> darkk: lol
17:02:19 <hellais> :P
17:02:29 <anadahz> I think that if they blocked OONI/Tor servers, updates will be the least of our concerns :)
17:02:46 <darkk> I'm serious about blocked github/wikipedia, but it was always unblocked in a day or so.
17:03:05 <darkk> So it's not a reason to avoid them.
17:03:18 <hellais> anadahz: what other concerns do you have if OONI/Tor servers are blocked?
17:03:43 <agrabeli> I think it's important to take the fingerprintability component seriously into consideration
17:03:45 <hellais> because it's a fact that they are blocked in many of the places where we operate and we have various ways of circumventing the blocking
17:04:01 <hellais> cloudfronting, tor bridges, etc.
17:04:30 <anadahz> hellais: that we'll not be able to receive OONI measurements
17:04:58 <hellais> anadahz: we will, they can be submitted either via cloudfronting or tor hidden service (with bridges)
17:05:41 <anadahz> hellais: yeah but is there built-in support that falls back to cloudfronting?
17:05:51 <sbs> agrabeli: +1
17:06:08 <hellais> anadahz: for collectors and test helpers yes.
17:06:37 <anadahz> hellais: :)
17:06:46 <hellais> I don't remember if it's also done for the bouncer, but if it's not yet implemented it's part of the design
17:06:51 <anadahz> but not for ooniprobe
17:07:04 <sbs> hellais: previously you mentioned a 1.6.0.1 update that also adds the script for updating lepidopter, can you say more on that?
17:07:19 <hellais> yeah sorry we veered a bit off topic
17:07:31 <hellais> so let me provide first a tiny bit of context around this
17:08:31 <hellais> so, for the past week anadahz and darkk have been doing a feasibility study for implementing automatic unattended full OS/image updates of lepidopter.
17:09:02 <hellais> based on their analysis it seems apparent that whatever solution we are going to integrate, extend, implement, deploy will require a considerable amount of effort and time.
17:10:07 <hellais> moreover even if we are to go for the full OS update system, there are still some pi's out there that we will probably not be able to ship a new SD card to and hence we would still need to come up with some update system to trigger updates in them
17:10:38 <hellais> currently our update mechanism is based on a cronjob that runs pip install --upgrade every 7 days and that is the only vector we can use to provide updates to all raspberry pi images.
17:11:02 <hellais> this means we have to work with what we have and come up with something that can work for them as well
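[For reference, the existing update vector amounts to a crontab entry roughly like the one below; the exact schedule, paths and package name are assumptions for illustration, not a quote from the image.]

    # /etc/cron.d/ooniprobe-update (sketch): re-install ooniprobe from PyPI once a week
    0 4 * * 0   root   pip install --upgrade ooniprobe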
17:11:49 <darkk> I'd say `deploy` is the hardest part, as it requires on-the-fly migration of these PIs to any new scheme having OS-wide updates. Everything else _may_ be done in a similarly sketchy way.
17:12:26 <hellais> given this I believe it's best to reprioritise our work for the upcoming week on a minimal update system that will work by updating only the ooniprobe software, provisioning the raspberry pi images with the new software update mechanism and then handling the update to the 2.x series via this update mechanism
17:12:52 <hellais> in parallel to this we should begin integrating 2.x into a new raspberry pi image that ships natively the update mechanism and the GUI
17:13:39 <willscott> don't the rpi's now already do an apt-get upgrade cron against the ooniprobe software?
17:14:13 <hellais> darkk: the idea is that in the immediate future we would not be doing on-the-fly migration to OS-wide updates. Instead we would do it in 2 steps: 1. Ship an updater via the pypi vector 2. Update to 2.x via the updater
17:14:27 <anadahz> willscott: no, ooniprobe is being installed via pip, since there were no recent debian packages of ooniprobe
17:14:45 <willscott> anadahz: but does it periodically re-pip-install it?
17:14:57 <sbs> regarding on the fly update of the PIs, why wouldn't it be enough to run apt-get to fully update the OS (minus OONI)?
17:15:00 <anadahz> willscott: right
17:15:15 <willscott> isn't that enough?
17:15:16 <hellais> willscott: yes, however they actually don't do apt-get upgrade, but they do pip install --upgrade, which means that to handle raspberry pi specific updates we would have to include all this logic inside of the setup.py
17:15:45 <hellais> however setup.py is also used by others that don't run it within the context of lepidopter, so it would mean maintaining as part of the stock ooniprobe source tree a bunch of logic that is raspberry pi/lepidopter specific
17:15:48 <willscott> what other changes do you want to be able to make to the 1.x image via updates?
17:16:04 <hellais> it would be better to have this decoupled and part of something else that is only executed on the raspberry pi's
17:16:18 <sbs> hellais: yes, makes a lot of sense!
17:16:27 <darkk> sbs: as far as I see, the only reason to avoid that is (unlikely) FS corruption if power is lost during the update && upgrade.
17:16:42 <hellais> willscott: there are various changes that need to happen that are lepidopter specific. I made a partial list of them here: https://github.com/TheTorProject/ooni-probe/issues/593
17:16:47 <willscott> it sounds like 2 upgrade paths for the 1.x and 2.x images is duplicating effort to support something that seemed pretty clearly advertised as an alpha / "this may break in the future" status
17:17:00 <willscott> cool
17:18:11 <sbs> darkk: right, so there is a way of running the update (like having two partitions) that guarantees that an aborted-while-in-progress update can be restarted, correct?
17:19:03 <darkk> sbs: correct, but migration to such a scheme is too complex to be done right now, so hellais suggests to postpone it. Makes perfect sense as old PIs will eventually die due to wear-out :)
17:19:04 <hellais> willscott: yeah that is true, it was clear that it was an alpha and future versions may break, however a lot of partners have gone through a lot of effort to deploy these probes in very risky countries and it will be very hard for us to be able to reach them with a new SD card to do the update
17:19:06 <anadahz> willscott: true but apparently it seems to be a big issue for people to swap SD cards
17:19:42 <anadahz> sbs: Yes a dual-copy partition method such as SWUpdate
17:20:01 <hellais> willscott: I can give you more details on this off the record, but let's just say that some of them have gone to pretty extreme lengths to get them into certain countries and these are countries where you can't exactly rely on the post to send an SD card to
17:20:16 <hellais> also the people that have the pi's are not technically savvy enough to be able to burn it themselves
17:20:26 <willscott> makes sense
17:20:43 <anadahz> sbs: another way is via partition overlays
17:20:46 <agrabeli> hellais & willscott: and in addition to this, most of the probe hosts in these countries aren't "technical" enough to make changes as needed
17:21:06 <anadahz> sbs: re: https://github.com/TheTorProject/lepidopter/pull/69
17:21:07 <sbs> darkk anadahz: thanks
17:21:12 <willscott> i guess the only other thing to think about is if the number of deployed pi's is such that it is less work to develop software for upgrade, or to manually do upgrades via ssh on each pi
17:22:08 <sbs> hellais: so, how will the update script be rolled out in practice from the context of setup.py running on lepidopter?
17:22:27 <hellais> willscott: some of these pis don't have ssh access, so that is also not something we can easily do on all of them
17:22:30 <agrabeli> willscott: there are a number of pis out there that we probably won't be able to do upgrades to via ssh
17:23:09 <darkk> hellais: but we can enable it via updater :)
17:23:22 <willscott> but how many, are those the ones you can't get an updated sd card to?
17:23:45 <willscott> (i'm not opposed to the plan put forth, just asking questions to see if there's a way to spend less developer time on this)
17:24:20 <anadahz> hellais: how many did you count last time?
17:24:21 <darkk> btw, have we ever seen a lepidopter-PI with damaged FS so far?
17:24:23 <sbs> would having ssh access (and so potentially unfettered power over the boxes) change the terms of our agreements with partners and/or cause other legal issues?
17:24:30 <anadahz> darkk: yes mine
17:24:45 <anadahz> located in DE
17:25:22 <agrabeli> sbs: nothing in regards to ssh access is specified in MoUs, though some partners have been open to this
17:25:39 <hellais> sbs: good question, so basically the plan is the following. We cut a new release 1.6.1.1 and publish it to pypi. This new release runs, as part of setup.py, a procedure that will 1. Remove the auto-update cronjob 2. Install updater.py and the public.asc key into the correct locations 3. Set up systemd to run this with an interval of 6h.
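[A minimal sketch of what step 3 could look like as a systemd service/timer pair; the unit names, paths and boot delay are assumptions, not the actual lepidopter units.]

    # /etc/systemd/system/lepidopter-update.service
    [Unit]
    Description=Run the lepidopter update agent

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/python /usr/local/bin/updater.py update

    # /etc/systemd/system/lepidopter-update.timer
    [Unit]
    Description=Run the lepidopter update agent every 6 hours

    [Timer]
    OnBootSec=15min
    OnUnitActiveSec=6h

    [Install]
    WantedBy=timers.target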
17:25:45 <anadahz> darkk: it died after ~6 months
17:26:02 <hellais> anadahz: how many what?
17:26:23 <darkk> hellais: you'd rather swap (2) and (1)
17:26:40 <anadahz> hellais: how many Pis are out there
17:26:47 <andresazp> does lepidopter pull from the mainstream pip package repository
17:26:48 <andresazp> ?
17:27:06 <hellais> darkk: sure, I mean this should all be done as a transaction, so if any of the steps fails it should revert back to the initial state
17:27:22 <agrabeli> anadahz: 20
17:27:46 <sbs> hellais: uhm, and of course this procedure is conditional on the presence of lepidopter, right?
17:28:02 <hellais> anadahz: as agrabeli said, we have given out 20, but IIRC there are only about 10-15 actively submitting measurements.
17:28:14 <hellais> sbs: yes of course.
17:28:57 <agrabeli> anadahz & willscott: there are 22 pis out there currently (sorry, miscounted earlier)
17:29:12 <sbs> hellais: I guess the minimal 1.6.1.1 can just do step 2 and do all the rest using updater.py, right?
17:29:22 <agrabeli> anadahz and willscott: out of all these pis, only 3 of them have tor hidden services
17:29:29 <hellais> the nice thing about doing this by replacing the setup.py with the updater is that we only have to implement the update once and we can perhaps at some point in the distant future remove this logic from the setup.py script entirely
17:29:48 <hellais> (we probably want to keep it there also for future versions just to be sure that everybody gets a chance to update)
17:30:04 <hellais> obviously it needs to also check if the updater is already installed and run only if it isn't
17:30:15 <anadahz> andresazp: yes that has the latest stable release
17:31:01 <agrabeli> we'll be shipping about 15 pis over the next month
17:31:46 <agrabeli> many of which will end up in countries that we won't have easy access to later on
17:31:53 <sbs> agrabeli: ack regarding the MoU... if possible I'd avoid us having root access on lepidopters because it increases the scope of what we can do using the probes way beyond the software we deploy using standard channels and this imho could put partners in a more troubling situation if caught, not to mention that say I have access to all lepidopters, I am compromised, and someone uses that access to do nasty things
17:32:17 <andresazp> If it’s Ok and there is time in the meeting I would love to share an overview of our plans for the Venezuela deployment to get feedback from you guys
17:32:46 <agrabeli> sbs: I totally agree with you.
17:32:51 <anadahz> sbs: very good point, have a look at: https://github.com/TheTorProject/lepidopter/issues/35
17:33:46 <agrabeli> andresazp: yeah, we'd love to hear your overview
17:33:47 <darkk> anadahz: #35 is solvable with PAM
17:33:59 <darkk> agrabeli: compromised admin's key is harder to mitigate
17:34:12 <anadahz> andresazp: sure do you want to add this topic to the agenda (https://pad.riseup.net/p/ooni-irc-pad) ?
17:34:56 <andresazp> sure
17:35:39 <anadahz> hellais: I'm going to look at the lepidopter-update after the meeting, is there anything specific that you would like to share about the implementation?
17:36:27 <hellais> sbs: I expect it to be fairly hard to remove entirely root access from the lepidopter image without significantly impacting our ability to expand the platform in the future. I mean one of the main reasons why we use a dedicated device is so it doesn't have any user data on it. Confidentiality of the local network is a concern, but I don't think it's eliminated by disabling root.
17:37:35 <sbs> hellais: I am not advocating against having a root user, I am advocating against us having ssh access as root
17:37:50 <agrabeli> sbs: agreed
17:37:51 <darkk> sbs: what's the difference?
17:38:12 <hellais> anadahz: I don't have anything specific to add. It's fairly simple how it works. The gist of it is 1. Check for a certain github tag (called latest) that has in its resources a file called "version" that contains the latest version number (they are ever-increasing ints) 2. Download every update file from $current_version to $latest_version and on each download verify that it's signed 3. For each version execute the
17:38:18 <hellais> python update script
17:38:34 <hellais> that repo includes the update agent and the scripts for maintainers to manage the update service
17:38:57 <sbs> darkk: that we do not have the power to login to a specific raspberry and launch arbitrary commands, but we must roll out updates using our infrastructure -- which is more open and scrutinizable by third parties
17:39:51 <hellais> a gotcha is that I make the assumption the update script is idempotent, so as to shift the complexity into the update scripts themselves, which are easier to update, and the agent can stay the same in the long run
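[A minimal sketch of the update-agent loop described above, in Python 2 as ooniprobe targeted at the time; the repo URL, tag naming, asset names and state-file path are assumptions for illustration, and the lepidopter-update repository is authoritative.]

    #!/usr/bin/env python
    # Sketch of the update agent: find the latest version, then apply every missing
    # update in order, verifying each one's signature before running it.
    import subprocess
    import urllib2

    RELEASES = "https://github.com/TheTorProject/lepidopter-update/releases/download"
    VERSION_FILE = "/var/lib/lepidopter/update_version"  # hypothetical state file

    def current_version():
        try:
            with open(VERSION_FILE) as f:
                return int(f.read().strip())
        except IOError:
            return 0

    def latest_version():
        # The "latest" tag carries an asset named "version" holding an ever-increasing int.
        return int(urllib2.urlopen(RELEASES + "/latest/version").read().strip())

    def fetch(version, name):
        url = "{0}/v{1}/{2}".format(RELEASES, version, name)
        path = "/tmp/lepidopter-{0}-{1}".format(version, name)
        with open(path, "wb") as f:
            f.write(urllib2.urlopen(url).read())
        return path

    def perform_update(version):
        script = fetch(version, "update.py")
        signature = fetch(version, "update.py.asc")
        # Verify the detached signature; the key from public.asc is assumed to be
        # in the keyring already.
        if subprocess.call(["gpg", "--verify", signature, script]) != 0:
            raise RuntimeError("bad signature for update %d" % version)
        # Update scripts are assumed to be idempotent, so re-running one that was
        # interrupted half-way is safe.
        subprocess.check_call(["python", script, "run"])
        with open(VERSION_FILE, "w") as f:
            f.write(str(version))

    def main():
        for version in range(current_version() + 1, latest_version() + 1):
            perform_update(version)

    if __name__ == "__main__":
        main()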
17:41:18 <hellais> anyways this is what I had to say here
17:41:32 <hellais> for me we can move onto the next topic if nobody has anything to add
17:41:52 <sbs> darkk: to further clarify, I think we should not have ssh access, because I think we should not be able to run arbitrary commands on the probe in an unaccountable way, and I think this is also a safeguard for partners (one thing is if you can demonstrate what software was running, another if one can argue a partner gave a box to "foreign agents")
17:42:28 <agrabeli> sbs: +1
17:42:31 <sbs> hellais: I'll take some time to review the script more carefully and do some reasoning on its idempotence
17:43:21 <hellais> sbs: great thanks!
17:43:22 <agrabeli> in any case, I don't think we should have ssh access into partners' pis (for the reasons mentioned by sbs), though I think it is important that we are able to somehow troubleshoot remotely and that scripts get updated automatically
17:43:23 <sbs> hellais: we can talk about this offline
17:43:31 <hellais> #topic Set release date for the 2.x series
17:43:31 <darkk> sbs: I understand the point, but I'm still unsure if I accept it from an engineering point of view (say, running an update on 5 PIs over ssh may be MUCH easier than writing a proper updater script in advance)
17:44:02 * darkk has to think more about good way to have troubleshooting access
17:45:03 <anadahz> sbs: I am also very concerned about having SSH access but it's nearly impossible to achieve this when you release only one lepidopter release
17:45:44 <hellais> re: ssh access, we could potentially have a feature exposed from the web UI that allows the user to enable and disable ssh access on demand
17:45:50 <sbs> darkk: I see your point and have a similar feeling, but I guess here we need a way to strike a balance -- a possible solution could be to allow selected partners to give us ssh access in specific cases if they choose to do so (say we really don't know how to proceed and we ask one guy to enable ssh on his lepidopter - but that should not be the default)
17:46:24 <hellais> if we want to be extra careful we could even have this happen via a special account where every command executed is logged and written to an auditable log
17:46:35 <hellais> similarly to how teamviewer works
17:46:58 <anadahz> yeah GUI is an option
17:47:07 <hellais> the probe operator when they request support would go to some admin interface of the GUI and click enable remote SSH access and they get the hidden service address
17:47:14 <sbs> hellais: yep (even though, in theory, once we have root-like access -- which we would probably need to have? -- it's game over because we can subvert everything)
17:47:17 <hellais> and share it with us together with some secret
17:47:21 <anadahz> sbs: asking people to enable SSH access, though, is very hard!
17:47:27 <agrabeli> wouldn't we want all future images to be updated by default?
17:47:28 <darkk> hellais: that's probably not a good idea. Too much work, too little trust to the log.
17:47:38 <sbs> anadahz: as hellais is saying, we can figure out a way to do that for them
17:48:08 <sbs> anadahz: I mean, a user friendly way to allow them to give us access to the SSH of the probe
17:48:44 <agrabeli> sure, it's easier to just ssh in to them, but that's something that shouldn't scale imho
17:48:49 <anadahz> yeah but then we need to check if this aligns with the development tasks that we also need to implement..
17:49:05 <sbs> agrabeli: yes, my understanding is that we want auto-updates but I think we want that using a specific procedure not remote ssh access
17:49:30 <hellais> it's true that we could potentially subvert it, I can think of some ways in which we could make this harder for us to do (like making all commands go through some sort of proxy that uploads them to an append only log), though this is quite some over-engineering and probably for a marginal benefit
17:49:33 <agrabeli> we might be able to ssh into some of the partner pis with their consent, but do we really want users in general and across time to have this option?
17:50:01 <hellais> I think in the end if the user is OK with giving us this sort of power and it's something to be used sparingly only in emergency situations it's probably ok
17:50:28 <sbs> hellais: I agree with the marginal benefit part, here I was just trying to explore all the facets of the problem and not suggesting to do something not so simple
17:50:37 <sbs> hellais: +1
17:50:42 <agrabeli> hellais: yeah, but only in rare occasions with the consent of the partner, and not as something provided as an option in the GUI
17:51:09 <sbs> agrabeli: I guess this could be an advanced option that we can request partners to enable in specific cases
17:51:10 <darkk> I'd say having some sort of hardware toggle is a good option. E.g. reading authorized_keys from USB stick :)
17:51:41 <sbs> darkk: that's brilliant
17:51:50 <darkk> it's visual, it's obvious, it's trivial to revoke
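[A minimal sketch of the USB-stick toggle darkk proposes: sshd is reachable only while a stick carrying an authorized_keys file is plugged in. The mount point, paths and service name are assumptions; nothing like this ships in lepidopter today.]

    #!/usr/bin/env python
    # Enable ssh access while an operator-provided key is present on a USB stick,
    # disable it (and drop the key) as soon as the stick is removed.
    import os
    import shutil
    import subprocess

    USB_KEY_FILE = "/media/usb/authorized_keys"      # hypothetical mount point
    AUTHORIZED_KEYS = "/root/.ssh/authorized_keys"

    def enable_ssh():
        # Install the operator-provided key and start sshd while the stick is present.
        if not os.path.isdir("/root/.ssh"):
            os.makedirs("/root/.ssh", 0o700)
        shutil.copy(USB_KEY_FILE, AUTHORIZED_KEYS)
        os.chmod(AUTHORIZED_KEYS, 0o600)
        subprocess.call(["systemctl", "start", "ssh"])

    def disable_ssh():
        # Revoking access is just pulling the stick out: stop sshd and drop the key.
        subprocess.call(["systemctl", "stop", "ssh"])
        if os.path.isfile(AUTHORIZED_KEYS):
            os.remove(AUTHORIZED_KEYS)

    if __name__ == "__main__":
        enable_ssh() if os.path.isfile(USB_KEY_FILE) else disable_ssh()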
17:51:53 <anadahz> and I guess we are assuming that all people that use lepidopter are partners?
17:52:39 <sbs> anadahz: well, if somebody else installs lepidopter, I am not sure we would even want to consider having ssh access on their probe
17:52:42 <anadahz> which is actually not really the case
17:52:51 <sbs> anadahz: do we?
17:53:06 <anadahz> sbs: sure but how can we do this with one release?
17:53:24 <anadahz> sbs: I have raised similar concerns over time..
17:53:47 <anadahz> and it's not that easy to maintain 2 releases, or is it?
17:54:26 <sbs> anadahz: I think I am missing some bits, so I do not fully understand why now you are talking about two releases
17:55:06 <andresazp> if ssh requires a signed key that the partner has – or rather doesn't have if they just download the image – is that still a problem?
17:55:56 <anadahz> sbs: if we could have a partner-only lepidopter release we wouldn't have to discuss adding SSH or not now, if partners were OK with this.
17:56:53 <sbs> andresazp: yes, I think the difference is just that we give partners also the usb key and they choose, other people do not have the usb key
17:58:11 <sbs> anadahz: uhm, what about: lepidopter is always lepidopter, and partners can use a usb key to give us ssh access if needed?
17:58:18 <agrabeli> anadahz: I'm not sure most partners are comfortable with ssh (or even know what that is), though in general, as mentioned, I don't think we should be ssh-ing into people's boxes (whether they are partners or not)
17:59:43 <agrabeli> sbs: as darkk suggested?
18:00:23 <hellais> well whatever this option is it's going to have to be something that is to be enabled by performing some sort of action by the user, so I don't see how it's a problem to have it be a partner vs a non-partner image
18:00:58 <agrabeli> sbs & darkk: I think that's a great idea, given that we do that only in limited cases, and with the consent of partners. it sounds like a better idea than including ssh access as an advanced option of the GUI, that anyone could enable (without fully understanding what they're enabling).
18:01:10 <sbs> agrabeli: yep
18:01:13 <hellais> that is if they are a partner they will perform the action (insert the USB stick, click on something in the GUI, jiggle the power cord twice and do a handstand, etc.) if they are not a partner they will not perform this action
18:01:14 <darkk> well, let me introduce 'degradation levels' for the lePIdopter: 1) it works ok 2) it may be repaired with ssh 3) SD card has to be replaced (bootloader & both root images are damaged) 4) whole PI has to be replaced (PSU failure, for example)
18:01:16 <anadahz> sbs: increased complication as not all partners have physical access to Pis
18:01:21 <darkk> the question is  -- do we need step (2)
18:01:45 <darkk> may we move to (3) any time we need (2)?
18:01:52 <hellais> if we want to "enforce" that this action is not possible by non-partner probes, this could just be a matter of adding something to the setup wizard where the user declares if they are a partner or not
18:02:20 <anadahz> It seems that the lepidopter requirements are changing with the times and per discussion :P
18:03:30 <sbs> anadahz: I see, well, we should probably find the simplest solution that avoids exposing partners too much
18:03:58 <sbs> darkk: case 2) is probably a failure in the updater right?
18:05:00 <anadahz> as willscott mentioned, lepidopter alpha was meant to be an alpha. But it seems that we have deployed a bunch of Pis to people that cannot really replace the SD cards and now we're finding the most optimal ways to work around this.
18:05:23 <darkk> sbs: I can imagine several cases. Actually, it's `any case that needs human intelligence to solve`. It's something unplanned & unpredictable going wrong with software.
18:05:59 <darkk> sbs: anything that can't be solved by reboot and auto-wipe
18:07:00 <agrabeli> anadahz: yeah, I guess the problem comes down to how we roadmapped our deliverables. We probably should have only started deploying probes once lepidopter was stable and included all these features....but then again, if we had waited for that, we wouldn't have the measurements and country reports on time....ah well. :)
18:07:36 <andresazp> per my limited experience, we would’ve needed to change at least 4 SD cards in a few months of operation if we didn't have SSH, because of unforeseen problems
18:07:39 <anadahz> agrabeli: True!
18:08:34 <sbs> darkk: okay, right... I think I know too little about managing raspberry pis to estimate the likelihood of these kinds of problems, I trust the experience of anadahz and yourself
18:08:37 <anadahz> agrabeli: I'm describing the issue to the rest of the people to understand how we came to this solution
18:08:46 <darkk> andresazp: do you mean bricking lePIdopter via ssh unexpectedly?
18:10:03 <andresazp> We didn't use lepidopter, but my point was in support of the option for ssh access for unforeseen problems if the user allows
18:11:05 <andresazp> In our case, the problem we had would have not been fixed with running pip update
18:11:58 <darkk> andresazp: are the cases documented somewhere? The information regarding real failure scenario may be useful.
18:12:57 <darkk> I'd love to see them documented in ooni-operators@ mailing list :-)
18:13:13 <agrabeli> since we're running out of time, should we proceed to andresazp's topic (and cover topic 2 last)?
18:13:39 <agrabeli> darkk: +1
18:14:00 <sbs> agrabeli: yep!
18:14:13 <andresazp> @darkk I will try to write there a bit, some machines we expect had FS or HW problems are being collected but we still don't have them
18:14:36 <andresazp> First of all, were you able to include our previous VE reports into your DB?
18:14:59 <hellais> sounds good
18:15:31 <hellais> #topic Update from Venezuela
18:16:26 <hellais> andresazp: not yet, I have a copy of the big tarball you gave me, but we have been holding back on integrating them until we get the pipeline into a more stable state (bulk loading of many measurements with the current system seems to not make the pipeline happy) and we also have some space issues
18:16:27 <andresazp> I shared the link to the compressed reports on an old meeting. It was on someone's etherpad grain on a self-hosted sandstorm instance
18:16:42 <andresazp> ok
18:17:34 <andresazp> So our plans for VE are more or less the following
18:19:07 <andresazp> Deploy at least 4x the RPis as in the pilot – we want to use ooni 2.x but that is not guaranteed as of yet
18:20:16 <andresazp> in 12 or more cities
18:21:15 <andresazp> it’s possible that we might use lepidopter, but i’m not sure
18:21:17 <andresazp> we will run our own backend, but would love help configuring it so that it report
18:21:30 <andresazp> so that it reports automatically to your pipeline
18:22:06 <andresazp> (assuming those parts still work as I understood them)
18:23:07 <andresazp> we would run another server in python/django that would help us visualize the data collected by our probes
18:23:47 <andresazp> taking over part of the role of the ooni pipeline; the official pipeline should get all reports nonetheless
18:26:14 <hellais> andresazp: this is great!
18:26:17 <hellais> a couple of questions:
18:26:28 <andresazp> It's just a smaller solution, easier to get going and integrate with other things. Here we would report "incidents" that cohesively and contextually describe a specific incident (not a measurement)
18:26:28 <hellais> 1) What is the size of the deployment you are looking at?
18:26:29 <andresazp> this would be for internal consumption but the code will be open
18:26:47 <hellais> 2) What is the timeline for the project (i.e. when are you going to begin collecting measurements)
18:26:50 <andresazp> 60 to 70 probes
18:27:24 <anadahz> andresazp: 60-70 probes in VE?
18:27:25 <andresazp> finally we would set up a website to check on all of the incidents
18:28:49 <agrabeli> andresazp: this all sounds fantastic!
18:28:55 <andresazp> an incident, conceptually, may reflect information from many measurements/reports/probes – even different tests or targets, as long as they are related, and put in context
18:30:21 <hellais> andresazp: that is a very interesting concept. I wonder if the concept at least could then be extended also to the explorer
18:30:29 <andresazp> similarly, as you would say that police were violently dispersing a peaceful protest, and include a list of injured people rather than repeat X person was beaten in LOCATION during the protest on DATE
18:30:57 <hellais> andresazp: these incidents are recorded by having trusted people review the measurements or are you also thinking of crowdsourcing the analysis?
18:32:02 <andresazp> someone with privileges on our server would post the incidents or update them
18:32:36 <andresazp> after reviewing measurements
18:33:14 <andresazp> that server could get really complicated really fast, so we are keeping things simple for the time being
18:33:42 <andresazp> We are working on developing a few extra tests, also
18:34:50 <andresazp> speed test, despite being a slow system, we believe we can saturate the connection, most common speeds are under 1mbit in ideal conditions
18:35:03 <andresazp> locally
18:35:36 <hellais> andresazp: you should look into measurement-kit for running speed tests. It has a pretty good implementation of NDT.
18:35:46 <hellais> and it also supports submitting the results to an ooni collector
18:35:53 <andresazp> I see
18:36:27 <andresazp> we have a simple implementation that ran outside of ooni on the pilot but thought of integrating it
18:36:42 <hellais> andresazp: do you have an estimate of when the first measurements are to be coming in?
18:38:36 <andresazp> we want to do something like the whatsapp connectivity test, and VPN block tests for TunnelBear (site previously blocked), Zello (a push-to-talk radio app that has been blocked) and Hotspot Shield
18:39:15 <sbs> andresazp: what system are you using for running measurements?
18:39:47 <andresazp> the pilot had rpi 2, rpi 2+ and occasionally a B
18:39:59 <andresazp> all the new ones are 3b
18:40:02 <hellais> andresazp: if you haven't seen it you may be interested in this whatsapp test I wrote some time ago: https://github.com/TheTorProject/ooni-probe/blob/feature/whatsapp-test/ooni/nettests/blocking/whatsapp.py
18:40:40 <hellais> it's not really thorough, that is it doesn't actually speak the whatsapp protocol, but it does do a general connectivity test towards all the whatsapp endpoints used in the mobile app and web app
18:40:54 <sbs> andresazp: I was already working to make debian packages for measurement-kit
18:41:35 <sbs> andresazp: and apart from that, if you're interested I can help you to add measurement-kit for running NDT in your use case
18:42:01 <andresazp> That's what we see, and we would pretty much replicate the approach unless the service wants to partner to help us make it a bit more sophisticated
18:42:26 <andresazp> SBS: That would be great
18:43:12 <andresazp> possibly this would be required for zello IIRC
18:43:46 <anadahz> andresazp: Since ooniprobe v2 is going to be the new stable release it will be really helpful if you could test ooniprobe 2.0.0 that will be shipped by default in lepidopter beta --planned to be released by the end of this week.
18:44:11 <andresazp> anadahz: Carlos is testing it
18:44:25 <anadahz> nice!
18:45:05 <andresazp> There are some features that we need on v2 though, and I’m not sure if we might be able to implement or re-implement
18:45:10 <andresazp> things from v1
18:45:32 <hellais> andresazp: great! Reports on any bugs or feature requests  would be super useful.
18:45:44 <hellais> andresazp: are there features that were present in v1 that you are missing in v2?
18:46:12 <andresazp> comments in some form, so that we could use a dict for different values
18:46:26 <andresazp> or maybe just a string
18:46:37 <andresazp> that we could parse
18:46:46 <andresazp> either in the ooni settings
18:47:24 <andresazp> or as a flag or option when running a deck. I understand that running tests changed a lot on v2
18:48:03 <hellais> andresazp: you can do this in the new deck specification as well, by means of the annotations.
18:48:07 <sbs> andresazp: to better understand, you are talking about the deck format, correct?
18:48:38 <hellais> you can specify these both as global (for the whole deck) or task specific
18:51:08 <hellais> for example like this: https://gist.github.com/hellais/d4fce0a27ee18105e990e955d3de1df7
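[A rough, hypothetical sketch of a deck carrying both global and task-specific annotations, as hellais describes; the key names below are guesses at the v2 deck format, and the gist linked above is the authoritative example.]

    ---
    name: VE measurements                 # illustrative deck, not the linked gist
    annotations:                          # global annotations (assumed key name)
      probe_id: ve-001
      city: Caracas
    tasks:
      - name: Web connectivity against the high-priority list
        ooni:
          test_name: web_connectivity
          file: lists/ve-high-priority.txt
        annotations:                      # task-specific annotations (assumed key name)
          run_by: cron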
18:51:38 <andresazp> I haven't been able to test v2 myself much, but I thought that feature wasn't ready
18:52:21 <hellais> I see,
18:53:08 <hellais> it should be working and I have done some testing of annotations as well. Though the best thing is if you report any type of difficulty you encounter with v2, or even if you need some specific feature or adjustment to be implemented, either as a github ticket or even just by sending me an email
18:53:09 <andresazp> I believe Carlos tried to test it
18:53:30 <andresazp> but we will certainly double test
18:53:37 <andresazp> For a bit of context:
18:54:10 <andresazp> On the VE pilot the annotations included hardcoded information like probe ID, ISP, city and whether the test was run by cron or manually by one of us.
18:54:11 <andresazp> The commands concatenated the request to run a certain test and list with the contents of a file that ID'd the probe
18:56:15 <andresazp> sorry, the lack of spellcheck is killing me
18:57:29 <andresazp> In this we might be more able to help: it's what we discussed about adding a flag for web_connectivity so that it doesn't necessarily run the HTTP request
18:57:49 <andresazp> for bandwidth reasons
18:58:16 <andresazp> I'm all ears for any more feedback and also criticisms
18:58:32 <hellais> yes I understand. I guess you could achieve the adding of this metadata by generating the decks on a per probe basis.
18:58:36 <andresazp> criticism of how we plan to do it
19:00:07 <hellais> andresazp: I fear that not running the http request part of the web_connectivity could lead you into not being able to identify various cases of censorship that don't rely on DNS based blocking. The bandwidth consumption of the web_connectivity test is much less than http_requests, but it's still much more than just doing the DNS resolutions obviously.
19:01:20 <hellais> I wonder how much mileage you would get from only running the http request when the DNS doesn't match.
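[A minimal sketch of the flag being discussed: a web_connectivity variant that only spends bandwidth on the HTTP request when the DNS answers look inconsistent with the control. The measurement primitives are passed in as plain callables for illustration; this is not the shipped test.]

    def measure(url, control_addrs, dns_resolve, tcp_connect, http_request):
        # Cheap steps first: DNS resolution and TCP connects.
        answers = dns_resolve(url)
        dns_consistent = bool(set(answers) & set(control_addrs))
        tcp_ok = all(tcp_connect(addr, 80) for addr in answers)

        result = {"dns_consistency": dns_consistent, "tcp_connect": tcp_ok}
        if not dns_consistent or not tcp_ok:
            # Suspicious result: confirm with the full (expensive) HTTP request.
            result["http"] = http_request(url)
        return result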
19:01:46 <andresazp> we would run the complete web_connectivity, including the http_request part, but not as frequently, probably just at night
19:02:13 <hellais> ah ok got it
19:03:00 <andresazp> DNS and TCP would ID virtually all if not every case of internet block or censorship ever implemented in VE
19:03:55 <hellais> understood. I do see some benefit of implementing this in the web_connectivity
19:04:03 <hellais> it shouldn't be too hard to do either
19:04:06 <andresazp> DNS is by far the most common,
19:04:06 <andresazp> Blocking IPs was somewhat common, but is currently not used
19:05:40 <hellais> also it seems like it would be useful to have per-test scheduling information specified in the deck descriptor
19:05:51 <andresazp> You will probably find similar needs in countries with less sophisticated censorship programs and low connection speeds
19:06:05 <hellais> yes that makes sense
19:06:55 <andresazp> For our pilot we had pretty specific scheduling for different tests and even different lists
19:07:37 <hellais> andresazp: it would be cool if you could share some information about this on some ticket on github so we can see to add it as a future feature.
19:07:39 <andresazp> so a high priority list would run more frequently in order to have early data on critical election-related sites being blocked
19:08:40 <andresazp> ok
19:08:46 <hellais> thanks for sharing with us your progress on this and please to keep us updated on how it moves forward and if there is anything we can do to help you!
19:09:50 <andresazp> Our release plan is:
19:10:01 <andresazp> early version of our server by Dec-Jan
19:10:25 <andresazp> early version of public site by Feb
19:11:55 <andresazp> First probes on the street and a new ooni-backend instance properly configured by November
19:12:05 <andresazp> or october
19:15:35 <hellais> November or October of this year?
19:15:40 <hellais> that is the server and public site will happen after the probes are deployed?
19:15:41 <andresazp> The first thing we might need help with is configuring an ooni v2-compatible ooni-backend
19:15:42 <andresazp> For the pilot we barely got it set up
19:16:11 <hellais> sure we can help with that. The v2 probes however are backward compatible with older backends as well
19:16:29 <andresazp> we might have a problem in our old server, then
19:16:44 <hellais> ok, we can speak more about this out of band
19:16:49 <andresazp> ok
19:17:02 <hellais> if there is nothing else to add I would say we can end this with a "slight bit of delay" :P
19:17:05 <andresazp> the public server is a public site that shows info on "incidents"
19:17:14 <andresazp> .. sorry
19:18:18 * graphiclunarkid waves - lurking in the meeting but has nothing to contribute this time.
19:19:01 <hellais> cool
19:19:14 <hellais> in that case thank you all for attending and have a good week!
19:19:16 <hellais> #endmeeting