00:16:30 #startmeeting prop280
00:16:30 Meeting started Wed Sep 13 00:16:30 2017 UTC. The chair is teor. Information about MeetBot at http://wiki.debian.org/MeetBot.
00:16:30 Useful Commands: #action #agreed #help #info #idea #link #topic.
00:16:58 So we started meetbot late, I'll replay the open questions and then we'll get to question 1
00:17:14 The privcount in tor proposal is here: https://gitweb.torproject.org/torspec.git/tree/proposals/280-privcount-in-tor.txt
00:17:26 It deals with the low-level blinding, noise, and aggregation; specific statistics are for a later proposal
00:17:38 1. How do we make sure the protocol survives data collector and tally reporter failures? (recall data collectors are on tor relays, and tally reporters combine the share keeper and tally server roles)
00:17:43 2. How do we make sure the protocol survives outlier or broken measurements from relays?
00:18:16 3. How do we make sure the added noise is sufficient, particularly as we add and remove statistics? What if we think more noise is safer? What if we want to add different noise? (This might be out of scope)
00:18:24 4. (how) do we measure different statistics over different time periods?
00:18:54 On to question 1
00:19:22 So the proposed design for redundancy is to have multiple subsets of tally reporters, where each subset of tally reporters handles a subset of data collectors
00:19:34 seems like question 1 depends in part on how we're going to pick the tally reporters (and how many)
00:19:59 For example, we have 9 tally reporters in 3 sets of 3, and each set handles 1/3 of the relays
00:20:07 armadev: how so?
00:20:21 it leads to different trust assumptions
00:20:25 I've seen three ideas here:
00:20:33 "do nothing; assume everybody's honest"
00:20:35 like, "we'll have the dir auths run them" vs "rob and aaron and teor will run them"
00:20:53 "use different instances of the algorithm with different members; hope one works."
00:21:12 "as above but use k-of-n secret sharing instead of multiple instances"
00:21:26 i haven't seen either of the last two fully worked out
00:21:55 secret sharing seems like the best to me to deal with TR failures
00:22:03 +1
00:22:09 (given the challenge of getting the dir auths to run bandwidth authorities [which to be fair is a code quality issue as well], let's make a design that doesn't require that level of trust)
00:22:30 My objection to secret sharing as it stands is "there is no specification and nobody has volunteered to write one"
00:22:38 but that's easily resolved :)
00:22:53 really not? oh!
00:23:00 it is cheap for both DCs and TRs and it can survive the failure of any n-k servers
00:23:06 sorry, n-k TRs
00:23:43 I'd also like to know if secret sharing can be implemented in a similarly efficient and forward-secure way as the current code uses
00:23:52 just so i'm following correctly, by failure do we mean 'missing' or 'byzantine'?
00:23:59 Yes
00:23:59 byzantine
00:24:01 fair enough nickm, one drawback is that the implementation is somewhat more complex
00:24:02 Both
00:24:08 ok
00:24:15 ohmygodel: a little complexity, I know, will be there...
00:24:15 armadev: i meant missing
00:24:28 ...but adding a bunch of logic to the critical path would be sad
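(A hypothetical aside for concreteness: a minimal sketch of the k-of-n Shamir secret sharing being discussed, over a prime field. None of this is in prop280 — writing the spec becomes an #action below — and all names and parameters are illustrative.)

    # Hypothetical sketch, not prop280: k-of-n Shamir secret sharing over
    # a prime field. Names and parameters are illustrative only.
    import secrets

    PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than any counter

    def shamir_split(secret, k, n):
        """Split secret into n shares; any k of them reconstruct it."""
        # A random polynomial f of degree k-1 with f(0) = secret.
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        shares = []
        for x in range(1, n + 1):
            y = 0
            for c in reversed(coeffs):  # Horner evaluation of f(x)
                y = (y * x + c) % PRIME
            shares.append((x, y))
        return shares

    def shamir_reconstruct(shares):
        """Recover f(0) from k or more shares by Lagrange interpolation."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if j != i:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    # 3-of-5: any two TRs can fail (go missing) without losing the secret.
    shares = shamir_split(123456789, k=3, n=5)
    assert shamir_reconstruct(shares[:3]) == 123456789
    assert shamir_reconstruct(shares[2:]) == 123456789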
00:24:37 a byzantine adversary can cause outputs to be garbage
00:24:42 so if incrementing a counter gets much slower, that would be bad
00:24:58 Which is why we split the data collectors into independent subsets
00:25:03 and if forward secrecy gets much worse, that would be bad
00:25:22 but the secret sharing is just for the secrets part, not the counter increments, right?
00:25:23 incrementing the counter would be the same as before
00:25:39 nickhopper: nobody knows, there is no spec
00:25:43 forward secrecy at the DCs would be the same
00:26:09 yes nickm, this definitely needs to be written up
00:26:33 So if we were to do k-of-n secret sharing, we could split the secret, then encrypt, and wipe the original secret
00:26:46 I think that gives us forward secrecy
00:26:59 but to make it a bit clearer, the way I envision it would work is like this:
00:27:08 1. DCs do blinding with TRs as before
00:27:14 2. DCs increment counters as before
00:27:37 3. DCs send k-of-n secret shares of the blinded value to TRs
00:27:54 4. TRs add current blinded values to received secret shares
00:28:24 5. TRs reveal secret shares to each other (or some designated party) to allow reconstruction of the secret, which is the desired aggregate value
00:29:04 1.5 The noise gets added into the counter before (sorry, I skipped this between steps 1 and 2)
00:29:48 Then as long as k TRs are online and reveal their shares, the secret can be reconstructed
00:30:01 And as long as no more than k-1 TRs collude, no private inputs can be learned
00:30:30 I am confused about the meaning of "blinded values" and "secret shares". Which steps produce which of these?
00:31:04 by "blinded value" I meant the value stored in the counter, which includes a blinding value, the noise, and any increments
00:31:52 "secret shares" are produced by DCs from the blinded value (aka the counter) in step 3 (which is at the end of the measurement period) and sent immediately to the TRs
00:33:39 So do we need a spec for k-of-n secret shares, and a spec revision to prop280 that uses them?
00:33:56 yeah
00:34:00 lol
00:34:20 Any volunteers? Otherwise I will just note them down as actions
00:34:26 and what they get us is that some talliers can fall out of the picture but we can still recover aggregate values?
00:34:46 ok yeah I got the steps mixed up
00:34:47 me, maybe. Have to think about it ;)
00:34:47 wait, i am confused.
00:34:54 ohmygodel: i don't understand your proposal
00:34:55 nickhopper was right
00:35:02 the blinding value is added to the blinded value.
00:35:05 the secret sharing happens at step 1
00:35:19 If any single DC is broken, its part of the blinding value won't be recoverable
00:35:40 (that was the answer to "volunteer?")
00:35:41 *of* the blinding value (only one is produced, not pairwise as before)
00:35:43 if the blinding value (i.e., the random value added to the counter to make it appear random upon inspection) is not secret-shared, and some TRs holding those values go offline, how can we reconstruct them?
00:36:07 right yes robgjansen
00:36:25 sure we can reconstruct the final blinded value...
00:36:29 Ok, so we need:
00:36:41 but we also need to reconstruct the blinding value in order to remove it
00:36:43 ok let me try again
00:37:11 1. Each DC chooses a random blinding value, sends secret shares to the TRs, and adds the blinding value into the counter
00:37:14 ohmygodel: maybe try in a specification?
00:37:17 :D
00:37:25 2. The DC increments the counter as before
00:37:40 1.5. The DC adds noise
00:37:44 3. The DC adds in noise to the counter
00:38:18 ohmygodel: ahh, i missed in step 1 that the blinding values are also secret shared
00:38:37 seems ok to me then
00:38:37 4. At the end of measurement, the DCs broadcast their counters / send them to a Tally Server / send them to the TRs / whatever
00:39:08 5. The TRs add their shares (actually just those shares from DCs that didn't fail before broadcasting their counters)
00:39:20 6. The TRs broadcast their secret shares to reconstruct the secret
00:39:26 Ok, so for forward secrecy, it's best that the noise is added before any increments (1.5, not 3.)
00:39:38 +1
00:39:38 7. The secret (aka the sum of the blinding values) and the broadcast counters get added to yield the aggregate
00:40:15 And for state management, it's best that 1. becomes "encrypt secret shares to the TRs"
00:40:25 And then all the data is sent in one hit at the end.
00:40:29 Let's do this in a spec
00:40:36 #agreed
00:40:39 yes teor that seems right
00:40:44 oh yes please
00:41:01 (sorry, i don't know how to use meetbot)
00:41:03 #action write a k-of-n secret sharing spec
00:41:25 #action revise prop280 to use k-of-n secret sharing
00:41:28 (I hope that works)
00:41:49 so that sketch also includes my suggestion to deal with DC failures - just have the TRs use only the shares from DCs that successfully submitted their stuff at the end of measurement
00:42:09 how do we handle DCs being deliberately junky?
00:42:17 what if there's disagreement about which DCs successfully submitted their stuff?
00:42:19 Let's move on to the next question, because we have 20 minutes left
00:42:29 2. How do we make sure the protocol survives outlier or broken measurements from relays?
00:43:06 this question depends on the ratio of broken measurements to all measurements, I think
00:43:31 ok so for this question, the subset idea seems like a fine one to me
00:43:40 The current proposal is to split the DCs into multiple independent subsets, calculate an aggregate for each subset, and then take the median (or whatever)
00:43:51 defining "broken" as byzantine, yes?
00:44:06 here broken is byzantine, yes
00:44:27 If we make the subsets depend on a shared random value released *after* results are submitted, then relays can't game their subsets
00:45:15 teor: to be fair, there is no spec for doing this part either. The current proposal assumes that the subsets have been constructed and that's that
00:45:18 This also handles a small amount of disagreement about which DCs submitted, for example, if a DC crashes during results upload
00:45:26 fun math. (notice that to game the median, you only need to get a liar into half of the subsets)
00:45:35 Yes, that's true
00:45:59 But you have to get the right number of liars in each subset
00:46:07 and of course the size of a subset ..
00:46:08 1 = right number
00:46:30 if all you are worried about is disrupting the result
00:46:40 yeah this is a bit of a sad hack to deal with the lack of robustness against bad DC inputs in the protocol
00:47:35 #action update the proposal to deal with post-submission shared-random-based relay subset selection
00:47:51 ^ in reply to nickm
00:48:16 thanks
00:48:36 I really don't think it can handle a strategic adversary
00:48:54 No, but neither can the current statistics, tbh
00:49:15 because in order to have good statistics you want reasonably large subsets, which means an adversary is likely to be in it
00:49:41 makes sense. alas
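(A hypothetical aside pulling steps 1-7 above together, reusing PRIME, shamir_split(), and shamir_reconstruct() from the earlier sketch. The noise of step 1.5 is omitted so the arithmetic stays exact, and encrypting shares to the TRs / wiping plaintext — teor's forward-secrecy point — is only noted in comments; none of this is specified in prop280 yet.)

    # Hypothetical end-to-end sketch of steps 1-7 above; not a spec.
    import secrets

    K, N_TRS = 3, 5  # any K of the N_TRS tally reporters must survive

    class DataCollector:
        def __init__(self):
            # Step 1: choose a random blinding value, secret-share it to
            # the TRs (in a real design: encrypt each share to its TR and
            # wipe the plaintext), and add it into the counter.
            self.blinding = secrets.randbelow(PRIME)
            self.shares = shamir_split(self.blinding, K, N_TRS)
            self.counter = self.blinding

        def increment(self, amount=1):
            # Step 2: incrementing stays one modular addition, as before.
            self.counter = (self.counter + amount) % PRIME

    dcs = [DataCollector() for _ in range(10)]
    for dc in dcs:
        dc.increment(7)  # each DC observes 7 events

    # Step 4: at the end of measurement, surviving DCs publish their counters.
    published = [dc.counter for dc in dcs]

    # Step 5: each TR sums, point-wise, its shares from the DCs that
    # published; Shamir shares are additive, so these become shares of the
    # summed blinding values.
    tr_shares = [(i + 1, sum(dc.shares[i][1] for dc in dcs) % PRIME)
                 for i in range(N_TRS)]

    # Steps 6-7: any K TRs reveal their summed shares; the total blinding
    # value is reconstructed and removed, leaving the aggregate.
    total_blinding = shamir_reconstruct(tr_shares[:K])
    print((sum(published) - total_blinding) % PRIME)  # -> 70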
00:50:03 We have about 10 minutes left, so let's leave that for future research?
00:50:04 we can't just have a huge number of subsets, because in the limit that is just releasing per-relay statistics, which is what Tor does now
00:50:19 teor: I do want to mention something about this
00:50:32 You must account for the number of subset outputs that are being produced when generating noise
00:50:45 right, more subsets means more noise
00:50:48 k subsets = k times the noise per subset to get the same privacy guarantee
00:51:13 #action increase the noise added in the spec for each subset of relays that produces a result
00:51:18 and that's the real reason to limit the number of subsets
00:51:40 3. How do we make sure the added noise is sufficient, particularly as we add and remove statistics? What if we think more noise is safer? What if we want to add different noise? (This might be out of scope)
00:52:16 well, it's essential if we want to deploy
00:52:32 Do we have a basic idea of how version upgrades will work?
00:52:58 between sets of statistics?
00:53:17 Yes
00:53:22 straw person #1: we treat all relays doing the wrong version as bad, and discard their votes
00:53:29 The current proposal says that TRs can add zeroes for missing counters, and then notes that they will need to add noise as well
00:54:13 But we also need minimum thresholds for activating a new statistic (and removing an old one)
00:54:36 in physics you would choose armadev's version ;) (sorry I'm physician)
00:54:46 teor that seems like a good approach to me: one stats regime at a time, switchover when enough have upgraded
00:54:58 For example: when a new counter is supported by 10% of relays, report it. When an old counter is supported by < 5% of relays, remove it.
00:55:00 Samdney: "physicist"; physician is different ;)
00:55:05 Or we could say "set of statistics"
00:55:31 (oh! thank you nickm, my english!)
00:55:33 teor: in practice we could have that be in the consensus, and let the authorities decide
00:55:36 I think it's less complex and less risky to switch an entire statistics set
00:55:46 Samdney: your english is still better than my anything-else :)
00:56:02 teor: but let's think it through
00:56:15 so everybody adds enough noise as if 100% of relays are reporting, even when only 10% of them are, and the rest are filled in as 0's?
00:56:20 But it makes for slower upgrades
00:56:32 armadev: the noise is independent of the number of relays
00:56:33 this would mean that if we were on statistics set X, we would never learn statistics from routers that did not support set X.
00:57:02 But if such routers had a different value for some counters within X, we would not see their values
00:57:03 Indeed. Which is very sad.
00:57:16 if we were looking for signs of an attack, a clever attacker would just attack the old routers
00:57:33 so, here's a countersuggestion:
00:57:36 or heck, add old routers to move us back to statistics set X-2
00:57:45 let there be multiple named sets of statistics.
00:57:53 each set can be turned on or off independently in the consensus
00:58:25 So the problem with this is that the noise distribution is a function of the entire set of statistics being collected
00:59:48 that is tied into my question 4
00:59:51 So it's not safe in the general case to combine new stats with old stats
01:00:00 Or quick stats with slow stats
01:00:27 unless we run the whole apparatus in parallel, one for each type of stat
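(A hypothetical aside on the subset-median scheme from question 2, with the noise caveat just noted: subset membership comes from a shared random value released only *after* results are submitted, so a relay can't choose where its possibly bogus values land. The blinding layer from the earlier sketches is elided — assume each subset's total can be decoded separately — and the hash construction and names are illustrative, not from prop280.)

    # Hypothetical sketch: SRV-based subset assignment plus median-of-subsets.
    import hashlib
    from statistics import median

    NUM_SUBSETS = 3

    def subset_index(srv: bytes, fingerprint: str) -> int:
        # The SRV is unknown at submission time, so a relay cannot game
        # which subset it (or its lies) will be assigned to.
        digest = hashlib.sha256(srv + fingerprint.encode()).digest()
        return int.from_bytes(digest, "big") % NUM_SUBSETS

    def robust_aggregate(srv: bytes, per_relay_values: dict) -> float:
        subtotals = [0] * NUM_SUBSETS
        for fingerprint, value in per_relay_values.items():
            subtotals[subset_index(srv, fingerprint)] += value
        # Each subtotal is a separately released (noisy) output, so each
        # needs NUM_SUBSETS times the noise for the same overall guarantee.
        return median(subtotals)

    srv = bytes.fromhex("00" * 32)  # stand-in for the consensus SRV
    # One lying relay ("relayF") can disturb at most the subset it lands in.
    print(robust_aggregate(srv, {"relayA": 10, "relayB": 12, "relayC": 11,
                                 "relayD": 9, "relayE": 10, "relayF": 1000}))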
01:00:34 what exactly are "quick" or "slow" stats?
01:00:39 just gonna write that
01:00:40 and make sure none of our stats are dependent on each other
01:00:43 you could divide the "privacy budget" (i.e. the noise allocation) evenly among the sets of statistics that are available at a given time
01:01:14 Samdney: stats collected over different periods
01:01:15 samdney: quick ones would be ones where the numbers each relay publishes have to do with a small period, and slow ones would be for large periods
01:01:24 but you need that number to stay constant
01:02:04 Is there a formula that actually works as sets of statistics evolve?
01:02:34 nickm: the way we handled that was that our privacy definition only covered a given period of time
01:03:19 Ok, so we need to do something about continuous collection?
01:03:20 that is, we hide some amount of "activity" (i.e. making a circuit, sending bytes) within some period of time (e.g. 24 hours)
01:04:25 so reasonable activity within k hours should not be discernible from the statistics
01:04:31 ohmygodel: re: evolving stats: can you fix the sigma values for the old statistics, and then put all the new privacy budget on the new statistics?
01:04:54 Is this a calculation that is easy to automate?
01:05:22 teor: yes you can, although changing the privacy budget allocation required a delay period between the two collections
01:05:55 (i guess you could implement the delay by the talliers deciding not to tally)
01:06:03 the reason for that was your k hours of activity might span the two collection periods running under different budget allocations, which could violate the privacy guarantee
01:06:19 lots of moving parts here
01:06:21 if we have versions in the consensus, we can have a version "off"
01:06:56 but if we regularly turn some statistics off, that means we can't use statistics for ongoing incident detection so well
01:07:03 nickm: I think the answer is yes. For example, everything is basically automated in PrivCount now, except choosing exactly which statistics you want to collect (which requires a human to decide).
01:07:23 do those statistics require any annotations?
01:07:55 They need a "sensitivity" (the max amount by which the limited amount of user activity can change them)
01:08:14 how do we derive that value?
01:08:31 ohmygodel: and an expected value
01:09:01 nickm: existing statistics about user activity
01:09:08 teor: right, for accuracy, some guess about the likely value will help optimize the noise allocation
01:09:43 sounds like there's a bootstrapping issue there...?
01:09:52 nickm: or estimates, or the amount of activity we *want* to protect
01:09:56 what are the risks if we just make a wild-assed guess?
01:10:12 nickm: In general, you have to reason about it, but often there are just a few sensitivities shared across many types of statistics (that differ in ways irrelevant to the sensitivity)
01:10:40 Either: exposing as much information as relays currently do, or a signal that's swamped by the noise
01:11:21 Oh, but you get an aggregate, so the "too little noise" case is still better than tor's current stats
01:11:27 nickm: you might have a very noisy (aka inaccurate) answer, which you will likely recognize because you know the noise distribution
01:11:28 yeah
01:11:52 teor: there is no privacy issue from choosing the expected value incorrectly
01:12:01 without good ideas of expected values for noise "optimization", you risk some counters having too much noise
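(A hypothetical aside illustrating the two annotations described above — "sensitivity" and "expected value" — using the standard Laplace mechanism. prop280 does not fix a mechanism, and every number here is a made-up example.)

    # Hypothetical illustration only; mechanism and figures are not from prop280.
    import random

    def laplace_noise(scale: float) -> float:
        # The difference of two i.i.d. exponentials with mean `scale` is
        # distributed as Laplace(0, scale).
        return random.expovariate(1 / scale) - random.expovariate(1 / scale)

    def noisy_release(true_value, sensitivity, epsilon):
        # Laplace scale = sensitivity / epsilon gives epsilon-DP for one
        # released value. Sensitivity must upper-bound how much one
        # protected user's activity can move the counter: too low breaks
        # privacy, too high only costs accuracy.
        return true_value + laplace_noise(sensitivity / epsilon)

    # The expected value only matters for accuracy: a rough signal-to-noise check.
    expected = 1_000_000  # e.g. circuits per day (a guess)
    sensitivity = 6       # max circuits one protected user adds per day (a guess)
    epsilon = 0.3
    scale = sensitivity / epsilon
    print(f"noise scale {scale:.0f} is {scale / expected:.4%} of the expected value")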
01:12:17 what about the sensitivity?
01:12:22 but after collection, you can always compute the fraction of the result that noise accounts for
01:12:42 sensitivity must be right or the differential privacy guarantee may be violated
01:12:47 if it's too high, use your updated estimate in the next round
01:13:34 #action specify how to estimate sensitivity and expected values for each counter, and how to turn that into a set of sigmas
01:14:35 #action specify how to safely change the set of counters that is collected (or the noise on those counters) as new tor versions that support new counters are added to the network (and old versions leave)
01:15:10 Is that a good summary?
01:15:27 Do we have time to move on to question 4, or do we want to leave that for later?
01:15:28 4. (how) do we measure different statistics over different time periods?
01:15:37 I'm fine discussing it.
01:15:50 it seems related to the discussion we had on 3
01:15:54 (for the record, I think having only one set of statistics at a time will be trouble.)
01:15:59 (from a deployability POV)
01:16:04 agreed
01:16:12 +1
01:16:19 i also worry about cases where we have a whole lot of numbers, like the per-country counts.
01:16:39 I agree. I'd like there to be a way to safely add and remove individual counters as needed.
01:16:51 armadev: it might turn out that those counters were never safe
01:17:02 and have old relays still report counters.
01:17:06 (oh, and for the per-country thing, which geoip file each relay is using fits into it. ugh.)
01:18:12 nickm: An easy way would just be to treat each set of statistics as independent. That is what Tor does currently. We tried to do better by considering how user activity can affect all statistics being collected, but maybe incremental progress is better.
01:20:01 armadev: I agree that it isn't clear how private and accurate the entire collection of Tor statistics could be if it was all ported to using differential privacy.
01:20:02 we don't want it on today's list of questions, but we might also want to pick a policy where relays only collect things that we'd be ok having published per relay, if stuff breaks
01:20:02 so the privcount design assumes some kind of worst-case about how user activity is exposed by non-independent statistics?
01:21:02 armadev: I would further add that some individual statistics are unlikely to be collectable with reasonable accuracy and reasonable DP privacy (e.g. countries with few users, as Karsten discovered).
01:21:50 nickm: yes
01:22:18 I wonder how close we are to the worst-case here
01:22:20 nickm: yes, it considers how much user activity (given that it is within some limits) can affect each statistic and then takes the worst-case view that possibly *all* stats could be simultaneously affected by those amounts
01:22:59 does that mean that separating statistics, i.e. running several in parallel and assuming they're not correlated, can really reduce the amount of noise that it feels it needs to add?
01:24:07 Well, it reduces the differential privacy guarantee
01:24:16 yes, treating different sets independently can reduce the amount of noise it feels it needs to add to each one
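(A hypothetical aside with the arithmetic behind this exchange: under sequential composition, differential privacy degrades additively, which is the "it doesn't just explode, it degrades" point below. Set names and epsilons are made up.)

    # Hypothetical arithmetic, not prop280. Running three collections at
    # epsilon = 0.2 each is jointly (at worst) epsilon = 0.6 for activity
    # that affects all three correlated sets.
    per_set_epsilon = {"onion-service": 0.2, "bandwidth": 0.2, "per-country": 0.2}
    joint_epsilon = sum(per_set_epsilon.values())
    print(f"per set: 0.2, joint (composed) guarantee: {joint_epsilon:.1f}")
    # Conversely, to keep a joint guarantee of 0.6 across all three sets,
    # each set gets epsilon 0.2, i.e. 3x the Laplace noise scale it would
    # have if it had the whole budget to itself.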
01:24:33 but we'd best be right about the independence
01:24:47 Well, it would be no worse than the current state
01:25:00 except if we decide to collect something new, which we weren't comfortable collecting before
01:25:01 but yes, it means the DP guarantee only applies to each set and not to them simultaneously (although DP composes, and so it doesn't just explode, it degrades)
01:25:28 We could probably deal with that as a first cut
01:25:37 Particularly if the number of sets was small
01:26:15 For example, we have 5 different tor versions with significant presence (> 10%) on the network: https://metrics.torproject.org/versions.html
01:26:53 If we added 1-2 sets of statistics per version, then we'd be looking at ~8 sets of simultaneous statistics
01:27:01 armadev: It seems to me like most statistics will not be independent, and so maybe it is better just to call this accepting a lower privacy guarantee than assuming they are independent.
01:27:28 although from a design perspective, it would still make sense to group related statistics together
01:27:34 yeah. I wonder if we could define sets logically rather than per-version
01:27:40 ohmygodel: yeah. even for things that seem quite different, like "user counts" and "bandwidth use", they won't be independent
01:28:46 nickm: for example, we might have a "version 3 onion service" set in 0.3.3, and then an "extra version 3 onion service" set in 0.3.4
01:28:55 defining sets logically is better because the noise will be better suited for them too
01:29:07 and a "basic bandwidth usage" set that's very stable over time
01:29:12 nickm: could we limit the number of simultaneous sets active at a given time?
01:29:32 ohmygodel: programmatically or via good sense?
01:29:33 It would seem that the consensus and the protocols would be the way to do this
01:29:58 There's no point in collecting sets supported by very few relays
01:29:59 programmatically, using good sense :-) ?
01:30:37 by that I mean, could we say "Tor will allow no more than 5 sets of statistics to be reported at a given time"?
01:30:46 Or, maybe it's better to say "there's no point in collecting sets only supported by relay versions we don't support"
01:30:57 and then Tor (via the DCs and/or TRs) would enforce that?
01:31:42 Yes, it's possible. We could use the existing protocol version infrastructure for that
01:32:28 join #tor
01:32:33 oops
01:32:36 because then we could just build that into the privacy budget: 1/5 of it for each possible set
01:33:39 the total number of sets we want may increase over time though
01:33:52 changing that budget dynamically could also be done, but it should have some limit so that an adversary can't destroy all stats by running all Tor versions and making the per-version budget too low
01:34:16 why don't we estimate how many sets we think we'll have, build it into the privacy budget, and then *if* we go over, add new sets using the new budget?
01:34:35 or maybe have the counter/budget/something values in the consensus?
01:34:46 then we degrade slightly if we go over, but it's less complex, and less dynamic
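(A hypothetical aside sketching the cap being floated above: fix a maximum number of simultaneous statistics sets up front, give each active set an equal slice of the privacy budget, and refuse to activate sets beyond the cap, e.g. via consensus. Names and numbers are illustrative.)

    # Hypothetical sketch, not prop280: a fixed cap with per-set budget slices.
    MAX_SETS = 5
    TOTAL_EPSILON = 1.0

    def allocate_budget(requested_sets):
        # Reserve 1/MAX_SETS of the budget per possible set, so activating
        # a new set never retroactively weakens the guarantee on existing ones.
        active = sorted(requested_sets)[:MAX_SETS]   # deterministic cut-off
        rejected = sorted(requested_sets)[MAX_SETS:]
        per_set = TOTAL_EPSILON / MAX_SETS
        return {name: per_set for name in active}, rejected

    budgets, rejected = allocate_budget(
        ["bandwidth", "v3-onion", "v3-onion-extra", "per-country",
         "circuits", "dns"])
    print(budgets)   # five sets at epsilon 0.2 each
    print(rejected)  # ['v3-onion-extra'] -- must wait for a free slot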
01:35:01 do we want to build in some kind of mechanism to allow for a "reset" of the sets and privacy
01:35:15 I don't understand
01:35:36 a "default button"?
01:35:53 you guess ahead of time that you want 8 sets and your privacy budget is 200...
01:36:03 after 1 year you realize you were way off
01:36:20 and you actually have 30 sets and need a budget of 2000
01:36:28 Ok, so this is what a consensus parameter would be useful for
01:37:00 #action specify the privacy budget parameters that we need to turn into consensus parameters
01:37:57 consensus params make me nervous because they are potentially new every hour, and relays don't necessarily have history of them
01:38:17 if not consensus params, some similar mechanism?
01:38:27 yep. something. another moving part. :)
01:38:49 like the shared-random-value that the subset-creation module needs. hoof.
01:39:28 armadev: https://www.youtube.com/watch?v=jy5vjfOLTaA
01:39:30 I'd just like to mention again that while changing the budgets and allocations over time can be done, it requires some mechanism to make the guarantee apply. Options include (1) enforce a delay between measurement periods (what we do now), (2) reduce accuracy temporarily, (3) change the privacy guarantee to apply to activity within a specific time period (and not activity over any time period of a given length).
01:39:46 armadev: we already have a suitable shared random value
01:41:05 do we? does it go public at the time we need?
01:41:21 i mean, i agree we have one. i'm not yet convinced it's suitable.
01:41:54 (we should probably do what it takes to make it suitable. but that might involve constraints that make us sad.)
01:42:26 We might need to have some that last a long time, or have some way to get old ones, or such
01:42:27 ohmygodel: even though privcount does (1) now, i think that's not the best for a continuous deployment
01:42:46 armadev: The TRs could do subset selection among themselves as well at the end of the collection period
01:43:02 armadev: it might mean waiting 12 hours for stats, because it's secure as long as we don't know the final set of reveals, which can be revealed 12 hours before the SRV
01:43:20 robgjansen: yeah, I actually think (2) might work better.
01:43:40 ohmygodel is right, there are protocols that let you select subsets as long as one party is trusted
01:43:42 that way you don't lose statistics, some just get a bit blurrier for a bit
01:43:53 I'm afraid I need to sign off soon
01:43:57 big fan of (2)
01:44:23 so, we have a bunch of #action lines, and no names attached to them. what could go wrong? :)
01:44:25 agreed about (2); (3) leaves room open for privacy attacks
01:44:41 #action specify how to maintain privacy guarantees when the set of statistics changes, probably by reducing accuracy
01:44:59 robgjansen: indeed, woe to those who cross the International Privacy Line
01:45:15 :)
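(A hypothetical aside on option (2) above. Assume the guarantee covers any 24-hour window of activity at a total epsilon; a window spanning a changeover in budget allocation is exposed to both the old and the new collection, so for the first 24 hours the new collection could run on the leftover budget — temporarily blurrier results — instead of pausing as in option (1). This is one conservative reading of the discussion, not a worked-out spec; all numbers are illustrative.)

    # Hypothetical sketch of "reduce accuracy temporarily" at a changeover.
    TOTAL_EPSILON = 0.3
    WINDOW_HOURS = 24

    def new_period_epsilon(hours_since_changeover: float,
                           old_epsilon: float) -> float:
        if hours_since_changeover < WINDOW_HOURS:
            # Some user's window still overlaps the old period: spend only
            # what the old allocation left over, so old + new <= total.
            return TOTAL_EPSILON - old_epsilon
        return TOTAL_EPSILON  # no window spans both periods any more

    # With Laplace noise, scale = sensitivity / epsilon, so the smaller
    # transitional epsilon just means temporarily larger noise.
    for hours in (1, 12, 25):
        print(hours, round(new_period_epsilon(hours, old_epsilon=0.2), 3))
    # -> 0.1 during the transition (blurrier), 0.3 afterwards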
01:46:17 Ok, any more to add before I turn off the meetbot in about 1 minute's time?
01:46:32 Nothing from me
01:46:40 Thank you everyone for helping us make tor's statistics better
01:46:50 i was wondering about a 'question 6', something like 'is there sufficient evidence that this whole thing is worth the trouble'
01:46:51 yaay.
01:46:57 :)
01:47:05 (i hope the answer is yes, but we would be wise to collect enough of it first probably)
01:47:07 well, it depends how complicated we make it
01:47:36 I have the feeling it becomes very complicated ;)
01:47:36 i think in the long term it makes our existing statistics more private, and makes collecting other statistics safely possible.
01:47:37 i do agree that getting rid of our current stats, which are probably harmful in surprising ways, would be good if we can still get 'most' of it in this new way
01:47:53 but the complications are all tbd right now IMO
01:48:12 I suspect that we might see the next version of the spec and look for places to simplify
01:48:27 the in-tor implementation for the current prop280 is dead simple, fwiw
01:48:44 armadev: There are attacks right now using Tor's current stats (HS guard discovery).
01:49:30 yep
01:49:46 privcount is better than current methods
01:49:55 i assume that when we go to shift things over to privcount, we will... we will decide that none of them can be collected? lots of questions remain :)
01:50:13 do we want to keep it simple and make progress, or try to wait until we have the perfect solution?
01:50:40 can we have both? ;)
01:50:40 Also, I believe that most of your statistics could be gathered using this system with similar utility and no worse privacy (actually better, because the privacy methodology isn't ad hoc).
01:50:43 like, say, the per-relay bandwidth stats
01:50:56 i imagine when we go to do that in privcount, we will do away with per-relay bandwidth stats
01:51:10 robgjansen: not wait for perfection, that's for sure
01:51:24 armadev: yes, an exception is per-relay statistics that you actually want per relay
01:51:26 IMO, make simple progress now and don't try to design the perfect end-all solution; we can update to better solutions later
01:51:26 (note for the metrics team: we will not strand you suddenly)
01:51:38 where better solutions != privcount
01:51:56 well, i'm not clear whether this k-of-n thing is still privcount or not :)
01:52:02 but that's up to you
01:52:14 ok, i need to sign off. i'm getting silly
01:52:21 good night everyone! or good morning, as the case may be!
01:52:26 good night!
01:52:30 night
01:52:31 goodnight!
01:52:34 thanks all
01:52:42 so is there a plan here?
01:52:48 teor: are you going to commit tweaks to prop280? i made some too while reading it (grammar etc)
01:53:09 i assumed that the plan was that teor had a plan, since he's been saying #action without trying to attach names to things :)
01:53:28 armadev: #23492
01:53:34 great
01:53:40 You can add to my branch
01:54:01 And re: the plan, I don't know how to split up the workload
01:54:29 teor: yeah, suddenly none of us is working on transitioning PrivCount :-/
01:55:13 Sorry, that smiley looks more malevolent than I expected
01:55:27 I think we will work it out over the next few weeks, or at the dev meeting. Network team is focused on 0.3.1 and 0.3.2 right now.
01:55:53 Anyway, I think that's a good point to end the meetbot
01:55:56 #endmeeting