Started logging meeting in #olpc-admin, times are UTC.
[20:02:31] <hhardy> yay
[20:02:39] <hhardy> this thing is very cool btw
[20:02:45] <lfaraone> hhardy: Very.
[20:02:49] <hhardy> we have a secretary bot
[20:03:20] <hhardy> #topic VIG meeting of 2008-10-21 16:00 EDT
[20:03:33] <hhardy> #topic Agenda
[20:03:58] <hhardy> Agenda
[20:03:58] <hhardy> secretary, meetbot, minutes
[20:03:58] <hhardy> Wiki and website scalability/mirroring -- deployments + G1G1 are coming! awstats, betterawstats -- someone needed to work on configuring and automating coral CDN -- http://www.coralcdn.org/ owl and swan
[20:04:02] <hhardy> download.l.o scalability/mirroring build system -- weka Google, archive.org others?
[20:04:05] <hhardy> Torrent capability in browse--lfaraone
[20:04:08] <hhardy> Gforge--mchua proposal
[20:04:10] <hhardy> State of rt rt instances for Adric and private
[20:04:13] <hhardy> new business
[20:04:33] <hhardy> well dogi not here atm but his bot is admirable and he or someone has posted minutes from last week so all is good
[20:04:55] <hhardy> any additions to agenda?
[20:05:32] <hhardy> rude robot kicked me for flooding
[20:05:37] <hhardy> kicks meetbot
[20:05:43] <hhardy> someone say something
[20:05:47] <mchua> something!
[20:05:53] <hhardy> lol
[20:06:05] <hhardy>
[20:06:16] <hhardy> rise of the bots
[20:06:47] <lfaraone> hhardy: I haven't had a chance to produce anything on the torrent side, so that'll be a quick discussion
[20:07:18] <hhardy> #topic Wiki and website scalability/mirroring -- deployments + G1G1 are coming! awstats, betterawstats -- someone needed to work on configuring and automating coral CDN -- http://www.coralcdn.org/ owl and swan
[20:07:37] <hhardy> lfaraone: ok looking forward to your thoughts
[20:07:50] <lfaraone> hhardy: Well, we should have the default browse links go to a coral page.
[20:07:56] <hhardy> coral cdn looks very cool
[20:08:01] <hhardy> sort of a poor man's Akamai
[20:08:18] <lfaraone> hhardy: Yeah. But gah, is it slow.
[20:08:25] <lfaraone> #link http://wiki.laptop.org.nyud.net/go/The_OLPC_Wiki
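[Editor's note: the CoralCDN scheme shown in the #link above is simply appending ".nyud.net" to the hostname. A minimal Python sketch of rewriting a link that way, assuming only what the link itself shows; this is not code that Browse actually shipped:]

    import urllib.parse

    def coralize(url):
        """Rewrite a URL so it is fetched through CoralCDN by appending
        .nyud.net to the hostname, matching the #link shown above."""
        parts = urllib.parse.urlsplit(url)
        host = parts.hostname + ".nyud.net"
        netloc = host if parts.port is None else "{}:{}".format(host, parts.port)
        return urllib.parse.urlunsplit(
            (parts.scheme, netloc, parts.path, parts.query, parts.fragment))

    print(coralize("http://wiki.laptop.org/go/The_OLPC_Wiki"))
    # -> http://wiki.laptop.org.nyud.net/go/The_OLPC_Wiki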
[20:08:35] <hhardy> I didn't get a grasp of whose machines it runs on; is it peer-to-peer?
[20:08:48] <lfaraone> hhardy: NYU.
[20:09:04] <lfaraone> hhardy: "A preliminary deployment of CoralCDN has been online since March 2004, running on the PlanetLab testbed. As of January 2006, it receives about 25 million requests per day from more than 1 million unique clients. "
[20:09:22] <lfaraone> hhardy: Keep in mind they run google ads on the side of pages.
[20:09:45] <lfaraone> hhardy: But if we really wanted to we could write some JS to counter their ads-JS.
[20:10:02] <hhardy> planetlab seems to have a similar relationship to princeton as olpc to mit
[20:10:38] <hhardy> we could talk to princeton and google about that
[20:10:47] <hhardy> not we=VIG but olpc mgmt could
[20:12:05] <hhardy> a content server should serve as a starting point for distributed networks, whether peer-to-peer or Akamai-like
[20:12:28] <lfaraone> hhardy: want to make that an #action ?
[20:12:29] <hhardy> we need to get some folks from OSUOSL into these meetings
[20:12:38] <hhardy> maybe they can host a content server over there
[20:13:07] <hhardy> lfaraone: sounds like good idea yes
[20:13:10] <Culseg> hhardy: why not approach Akamai to 'volunteer' some connections
[20:13:21] <hhardy> there have been some discussions with them in the past
[20:13:27] <hhardy> as they are local
[20:13:52] <lfaraone> #action Talk to Princeton + Google about CoralCDN
[20:14:03] <hhardy> I #action Talk to Princeton + Google about CoralCDN
[20:14:10] <hhardy> #action Talk to Princeton + Google about CoralCDN
[20:14:15] <hhardy> is meetbot alive
[20:14:23] <lfaraone> hhardy: Yeah.
[20:14:36] <lfaraone> hhardy: It's silent. http://meetbot.debian.net/meetbot/olpc-admin.20081021_2002.html
[20:14:42] <hhardy> glares suspiciously at bot
[20:14:48] <lfaraone> hhardy: And apparently only you can propose #action s.
[20:14:59] <lfaraone> (or whoever #startmeeting-s)
[20:15:00] <hhardy> me and dogi
[20:15:03] <hhardy> yea
[20:15:39] <hhardy> I did talk to SJ about Internet Archive
[20:15:54] <hhardy> there are certain kinds of content they are interested in storing
[20:15:57] <lfaraone> hhardy: they'd be great for the wikipedia archives possibly?
[20:16:06] <kimquirk_> hhardy: where are we on the agenda?
[20:16:06] <hhardy> not for instance meeting notes of VIG
[20:16:17] <kimquirk_> scaling?
[20:16:28] <hhardy> we are talking about coral cdn
[20:16:33] <hhardy> and alternatives
[20:16:45] <kimquirk_> did we discuss owl/swan/weka?
[20:16:50] <kimquirk_> (can't tell where that is on the agenda)
[20:16:52] <hhardy> it's the last part of the posted topic
[20:16:59] <hhardy> yes scalability
[20:18:02] <hhardy> last part of that is to say something about owl and swan
[20:18:17] <hhardy> these are meant to be hot backups for crank and pedal
[20:18:38] <hhardy> right now I have them booting to the mirror but ethernet is not working
[20:19:03] <hhardy> I want to work on this tomorrow from 1-5pm EDT so if anyone is lurking on this channel I might have questions
[20:19:42] <kimquirk_> richard will help with eht?
[20:19:50] <hhardy> he said he was willing
[20:19:51] <kimquirk_> (it is hard to type the in as eth)
[20:20:18] <kimquirk_> do we want to discuss other options?
[20:20:29] <kimquirk_> owl/swan as a pair to replace pedal?
[20:20:38] <hhardy> also gnu or paulproteus if you read this log come lurk tomorrow
[20:20:45] <kimquirk_> or wait until next week
[20:20:56] <hhardy> yes plan b
[20:21:13] <hhardy> if mirroring them to other hardware proves to be unfeasible with reasonable effort
[20:21:30] <kimquirk_> edmcnierney: cjb: m_stone: what do you think of using owl/swan as a pair? instead of owl/crank and swan/pedal?
[20:21:45] <hhardy> kimquirk suggested moving services off of pedal to one of the new machines
[20:21:46] <kimquirk_> do you think that will be easier to get up and running as hot backups?
[20:21:53] <edmcnierney> Sorry - just got here after wondering what was (not) happening on irc.freenode.net......
[20:21:54] <hhardy> that would be mail, websites, wikis
[20:22:19] <hhardy> sorry ed, not trying to hide out; it's a legacy predating me
[20:22:39] <hhardy> we are at owl and swan in agenda
[20:23:04] <hhardy> I don't know why this chan was set up here instead of there
[20:23:22] <hhardy> discussing another way to do mirroring
[20:23:24] <edmcnierney> OK, thanks. Should we just have hhardy, m_stone, cjb, and edmcnierney get together in person to work out a plan?
[20:23:27] <kimquirk_> edmcnierney: just to give you some background... owl and swan are hefty machines but not exactly the same as pedal and crank
[20:23:50] <hhardy> migrate everything from one of crank or pedal to a new machine
[20:23:52] <edmcnierney> Yes, Henry filled me in a bit when he first looked for assistance on this transition.
[20:24:18] <hhardy> then use the freed-up machine as the mirror of the remaining one
[20:24:26] <hhardy> and the two new machines as mirrors
[20:24:49] <edmcnierney> Do we have a document describing what services are provided on which machines now?
[20:25:05] <hhardy> there is for pedal, dev needs to be documented
[20:25:17] <hhardy> #ACTION ITEM dev needs to be documented
[20:25:40] <hhardy> dev=crank
[20:25:46] <hhardy> those names are interchangeable
[20:26:15] <kimquirk_> hhardy: whose action is this?
[20:26:21] <hhardy> mine
[20:26:26] <kimquirk_> ok
[20:26:50] <hhardy> I sent myself a ticket to sysadmin
[20:26:50] <kimquirk_> i thought that already existed.
[20:27:06] <edmcnierney> Thanks, hhardy. It's not immediately obvious what the complete set of services running on dev/crank are, so that's important.
[20:27:09] <hhardy> well if it does I will merge them
[20:27:13] <kimquirk_> do we also need a network diagram... or is this really easy to figure out
[20:27:25] <hhardy> I think it would be useful
[20:27:39] <kimquirk_> (which is not to say that I have the network diagram in my head... so I would appreciate it)
[20:27:40] <hhardy> it is nice for conversations to point at this or that
[20:28:06] <kimquirk_> hhardy: so this week, while you and smithbone figure out ethernet, can you also get this documentation?
[20:28:27] <kimquirk_> then the next step might be to convene all the roots in the office for a good discussion with concrete next actions to get the job done
[20:28:31] <hhardy> some of it I will have to confirm with cscott
[20:28:40] <kimquirk_> he will probably answer email
[20:28:43] <hhardy> the back-end stuff in builds, and activation and signing
[20:28:52] <hhardy> yep
[20:29:18] <hhardy> kimquirk all for that
[20:29:27] <hhardy> maybe I should bake cookies :)
[20:29:36] <kimquirk_> i feel like we've had action items and lists and plans for these machines for a long time... do you think we need to do that all again... with the express purpose of 'just getting it done'?
[20:29:51] <kimquirk_> cookies are good.
[20:30:09] <hhardy> I don't think it's an all-hands-on-deck thing unless we decide that the original plan isn't workable
[20:30:18] <hhardy> then everyone will want their say on it
[20:30:39] <kimquirk_> hhardy: btw - do you have a link to the original plan? or is it meeting notes?
[20:30:52] <hhardy> just irc logs
[20:30:55] <kimquirk_> (i'm not sure I know the original plan)
[20:31:00] <kimquirk_> anymore
[20:31:01] <hhardy> we don't have nicely formatted ones going back that far
[20:31:20] <hhardy> the original plan is: owl mirrors crank/dev
[20:31:25] <hhardy> swan mirrors pedal
[20:31:31] <kimquirk_> hhardy: but it seems like we had a plan and it would be good to reference that one plan ... perhaps it should go into the network diagram of the future
[20:31:39] <kimquirk_> so a diagram of today... and one of the future.
[20:31:39] <hhardy> if one goes down, we bring up the other while we work on it
[20:31:50] <hhardy> that was the simple original plan
[20:32:04] <hhardy> ok
[20:32:28] <hhardy> #ACTION ITEM hhardy to prepare 2 network diagrams, current and future
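[Editor's note: the "owl mirrors crank, swan mirrors pedal" hot-backup plan is only described in outline above. The sketch below shows one common way such a warm standby could be kept in sync with rsync over ssh; hostnames, paths, and scheduling are illustrative assumptions, not OLPC's actual configuration:]

    import subprocess

    def mirror(master, path="/srv/"):
        """Run on the standby host: pull the master's tree into the same local
        path so the standby can be brought up if the master goes down."""
        subprocess.check_call([
            "rsync", "-aHAX", "--delete",   # archive mode, hard links, ACLs, xattrs
            "-e", "ssh",
            "{}:{}".format(master, path),
            path,
        ])

    if __name__ == "__main__":
        mirror("pedal.laptop.org")          # e.g. run hourly from cron on swan
        # mirror("crank.laptop.org")        # and this one on owl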
[20:32:41] <kimquirk> feels like I just traveled into the future
[20:32:43] <kimquirk> (or the past)
[20:32:55] <hhardy> perhaps you did :)
[20:32:58] <kimquirk> too much Heroes
[20:33:13] <hhardy> thought you didn't believe in those ;)
[20:33:22] <kimquirk> (tv show)
[20:33:27] <hhardy> lol
[20:33:52] <hhardy> cscott: ping?
[20:34:01] <cjb> hhardy: he is not online.
[20:34:09] <cjb> (in this channel, at least)
[20:34:21] <hhardy> download.laptop.org is running on an xo, he and mstone feel that it will work fine
[20:34:39] <hhardy> it has a 1tb disk on it
[20:35:03] <hhardy> I myself wonder how that is going to scale after G1G1
[20:35:13] <cjb> we talked about this last week
[20:35:16] <cjb> do you have new questions?
[20:35:57] <hhardy> none that can't be answered by logging onto the machine and reading logs
[20:36:13] <cjb> Scott expressed a belief that the load on download.l.o will not increase dramatically during G1G1, because there is nothing on it that will be of regular interest to G1G1 users.
[20:36:14] <lfaraone> hhardy: it's still running on a B2? Lol.
[20:36:17] <hhardy> this probably didn't need to be cut and pasted from last week's agenda
[20:36:24] <cjb> I still concur with his belief.
[20:37:00] <hhardy> any news on weka?
[20:37:02] <kimquirk_> sorry i missed the last few items (not sure what is going on with my machine)... cjb - what is our scaling solution for download.l.o?
[20:37:13] <kimquirk_> (or did someone answer that)?
[20:37:18] <cjb> I did, I'll paste
[20:37:21] <lfaraone> kimquirk_: 16:36 cjb$ Scott expressed a belief that the load on download.l.o will not increase dramatically during G1G1, because there is nothing on it that will be of regular interest to G1G1 users.
[20:37:31] <hhardy> he and cscott and mstone concur that it will scale fine without changes
[20:37:38] <kimquirk_> really?
[20:37:39] <kimquirk_> hmmmm
[20:37:43] <cjb> it's not the updates server
[20:37:49] <lfaraone> cjb: ah, good.
[20:37:59] <kimquirk_> oh... what does it serve?
[20:38:09] <lfaraone> kimquirk_: static downloads of disk images.
[20:38:09] <kimquirk_> and why do we want it on an XO? just as an experiment?
[20:38:09] <cjb> it serves jffs2 images and some movies
[20:38:12] <cjb> static files
[20:38:28] <cjb> at the time, we didn't have another server with lots of disk space available
[20:38:37] <cjb> maybe we do now, though
[20:38:38] <hhardy> movies of olpc presentations?
[20:38:40] <cjb> yes
[20:38:40] <kimquirk_> if it serves anything that needs bandwidth, why don't we want it to be served from our colo?
[20:39:12] <cjb> I don't think there is any particular reason, other than that none of this has shown itself to be necessary yet
[20:39:53] <cjb> serving static files by the tens-of-thousand is what web servers do, and we aren't anywhere near that load yet. if it would make life easier to consolidate it onto another server, we could do it for that reason.
[20:40:10] <cjb> hhardy: is there another server with >1TB free that you wish to propose?
[20:40:17] <kimquirk_> probably good to consolidate it for purposes of easier maintenance, backup, etc.
[20:40:22] <hhardy> it would be weka
[20:40:34] <cjb> I don't think that's a good proposal
[20:40:35] <kimquirk_> cjb: what does it have?? an external 1TB disk?
[20:40:45] <cjb> because we told Martin that we would disallow web access to weka
[20:40:49] <cjb> kimquirk_: yup
[20:40:50] <kimquirk_> we should just move the hard drive
[20:40:57] <kimquirk_> to a server off our colo.
[20:41:10] <hhardy> could certainly do that
[20:41:12] <cjb> kimquirk_: that's not usually how it works
[20:41:23] <cjb> rackmount machines use RAID, and want you to use the same size disk
[20:41:32] <cjb> they also sometimes use smaller disks than external hdds
[20:41:46] <kimquirk_> when did we buy this hard drive? what was the purpose originally?
[20:41:56] <cjb> hm, I don't know
[20:42:03] <cjb> I suspect a long time ago
[20:42:05] <hhardy> me neither
[20:42:07] <kimquirk_> i'm just kind of surprised that we are using it this way.
[20:42:26] <cjb> yeah
[20:42:33] <cjb> crank and pedal do not have nearly enough disk space
[20:42:35] <cjb> so that's why
[20:42:38] <kimquirk_> probably not a big deal... but if people expect that to be available and expect good connections... we should figure out the best way to move it.
[20:42:52] <cjb> I think the only prerequisite is the disk space issue.
[20:43:00] <cjb> once Henry solves that, he can work with Scott on moving the content.
[20:43:04] <hhardy> cjb: we could put another coraid 1521 at mit theoretically
[20:43:08] <kimquirk_> what about owl/swan... what disk space are they supposed to be able to get to?
[20:43:12] <hhardy> or the successor
[20:43:19] <hhardy> 4.5 tb
[20:43:19] <cjb> kimquirk_: they're identical to crank and pedal, I think?
[20:43:27] <cjb> hhardy: you mean buy another?
[20:43:32] <hhardy> less whatever is used by mirroring crank and pedal
[20:43:46] <cjb> if we don't have a travel budget, I doubt we have budget for another cluster of hard disks
[20:43:53] <hhardy> exactly
[20:44:19] <hhardy> so the disk on download isn't raided now and it still would not be raided
[20:44:21] <cjb> anyway. I think you can solve the "reserve 1TB somewhere where it won't be needed" problem.
[20:44:28] <hhardy> unless we bought another like one for a mirror
[20:44:28] <cjb> and we can go from there.
[20:44:45] <kimquirk_> ok... let's get the network diagram of our current system and figure out how to move forward... including things like download.l.o
[20:44:51] <hhardy> ok
[20:45:15] <hhardy> lfaraone, we are tabling the torrent discussion correct?
[20:45:16] <cjb> hhardy: crank and pedal have less than a terabyte free between them
[20:45:24] <hhardy> lfaraone: we are tabling the torrent discussion correct?
[20:45:29] <cjb> which suggests to me that their replicas do too
[20:45:35] <hhardy> no
[20:45:56] <hhardy> owl and swan have 4.5 tb usable
[20:46:04] <cjb> oh
[20:46:08] <cjb> so they aren't replicas, I guess
[20:46:13] <cjb> ok
[20:46:19] <cjb> still, the purpose of hotswap machines
[20:46:24] <hhardy> correct, that model was no longer sold
[20:46:36] <hhardy> the dl 185s were the closest, same processor etc
[20:46:37] <cjb> is that you can restore one from the other, not that you run one server on your spare that you aren't running on its master
[20:46:49] <lfaraone> hhardy: yes.
[20:46:59] <cjb> anyway, can move on.
[20:47:22] <hhardy> #topic Gforge--mchua proposal
[20:47:40] <hhardy> mchua: did you want to say something about this?
[20:47:55] <mchua> hhardy: absolutely
[20:48:02] <hhardy> please
[20:48:02] <mchua> http://wiki.laptop.org/go/OLPC_talk:Volunteer_Infrastructure_Group#Proposal
[20:48:17] <mchua> it's very short, but for those who weren't here for last week's discussion, background is at
[20:48:36] <mchua> http://wiki.laptop.org/go/OLPC:Volunteer_Infrastructure_Group/2008-10-14#Talk_about_GFORGE
[20:48:59] <hhardy> how would you prefer feedback, in IRC, on mailing list, or wiki markup? or all?
[20:49:04] <mchua> in short, it was proposed that gforge could be useful but that it should be a community-run, community-maintained thing, and that the thing to do was to find a good way to solicit volunteers to do it
[20:49:22] <mchua> hhardy: here is fine
[20:49:29] <hhardy> how would this help OLPC?
[20:49:37] <mchua> hhardy: I'll cross-ref the wiki page to logs of this meeting afterwards
[20:50:11] <mchua> hhardy: it would benefit olpc by moving the burden of selecting and setting up git (and/or other repo) hosting for olpc community projects out to the community itself
[20:50:11] <lfaraone> mchua: I'll be happy to help run gforge, I think it's an excellent idea.
[20:50:36] <mchua> hhardy: which has the secondary benefit of - without the current "set up repo" bottleneck, we may get a larger number of volunteer contributions
[20:50:41] <hhardy> this seems like it replicates important subsystems we depend on
[20:50:43] <lfaraone> hhardy: As well as bug tracking.
[20:51:04] <hhardy> we have message forums and mailing lists, source code management and bug tracking
[20:51:18] <lfaraone> hhardy: Somewhat, but it separates mission-critical (trac, main git) from community git and bugs and mailing lists.
[20:51:27] <mchua> hhardy: the message forums are non-olpc hosted, afaik
[20:51:30] <hhardy> to use this would mean re-engineering pretty much all the backend systems used for development
[20:51:43] <hhardy> who would use it?
[20:51:50] <lfaraone> hhardy: we don't have to change over _official_ olpc infrastructure right away.
[20:51:58] <lfaraone> hhardy: Anybody who wants to develop an activity.
[20:51:59] <mchua> hhardy: i see it as primarily for volunteer activity development, at least at first
[20:52:11] <mchua> hhardy: i'm not talking about switching over joyride to gforge at all
[20:52:12] <hhardy> have activity developers asked for this?
[20:52:22] <mchua> hhardy: yes
[20:52:34] <lfaraone> hhardy: Very yes.
[20:52:37] <mchua> hhardy: multiple times; last week I had a conversation with the Olin team
[20:52:47] <mchua> (one of our *very* active university chapters)
[20:53:05] <hhardy> what do they not prefer about the current setup?
[20:53:42] <lfaraone> hhardy: Time.
[20:53:53] <cjb> perhaps creating GIT trees could be automated
[20:53:58] <lfaraone> hhardy: It takes quite a bit of time to get a new project up and running.
[20:54:00] <mchua> hhardy: the email basically said "we're doing activity dev this year as a chapter activity; we tried this last year as a software design class assignment (as in, the prof gave 'make an activity' as a final project option)"
[20:54:04] <cjb> give a name, give a description, give a public key, and it gets created immediately
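[Editor's note: cjb's "name + description + public key" idea is only sketched in the discussion. Below is a hypothetical self-serve script illustrating it: create a bare repository, set its description, and restrict the submitted key to git-shell. The paths and git account are assumptions, not how dev.laptop.org was actually provisioned:]

    import os
    import subprocess

    GIT_ROOT = "/git"                              # assumed repository root
    AUTH_KEYS = "/home/git/.ssh/authorized_keys"   # assumed shared git account

    def create_repo(name, description, pubkey):
        """Create a bare repo immediately from a name, description, and ssh key."""
        repo = os.path.join(GIT_ROOT, name + ".git")
        subprocess.check_call(["git", "init", "--bare", repo])
        with open(os.path.join(repo, "description"), "w") as f:
            f.write(description + "\n")
        # force the key through git-shell so it can push/pull but not log in
        line = ('command="git-shell -c \\"$SSH_ORIGINAL_COMMAND\\"",'
                'no-port-forwarding,no-pty {}\n').format(pubkey)
        with open(AUTH_KEYS, "a") as f:
            f.write(line)

    create_repo("my-activity", "Example Sugar activity", "ssh-rsa AAAA... dev@example.org")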
[20:54:24] <lfaraone> cjb: Now, _that_s a bit too liberal. (and begging abuse)
[20:54:25] <hhardy> cjb you want to take that as action item?
[20:54:28] <mchua> hhardy: "we want to do version control in our development because it's the Right Thing, but it's taken Too Darn Long for us to get repos in the past, and we can't wait"
[20:54:29] <cjb> hhardy: no.
[20:54:31] <lfaraone> (in my opinion)
[20:54:41] <cjb> lfaraone: you're saying that gforge would work differently?
[20:54:56] <lfaraone> cjb: Yes, as far as I can tell it can be set up to require some sort of approval.
[20:55:03] <mchua> hhardy: "...so we're going to make our own repos in the school computer lab, but nobody outside the campus network will be able to access them"
[20:55:12] <lfaraone> cjb: (make a community team of a couple volunteers)
[20:55:20] <mchua> (because of campus lab restrictions, not because they wanted to)
[20:55:25] <cjb> lfaraone: then it seems to exactly mirror our current setup.
[20:55:39] <hhardy> I am happy if people in the community want to set up this kind of system
[20:55:51] <mchua> hhardy: that was the proposal
[20:55:54] <hhardy> however my main focus for VIG is to make existing systems work better
[20:56:22] <lfaraone> cjb: The team is much smaller, and if _I_ were OLPC, I'd be reluctant to give out admin control of our master git server to just anybody.
[20:56:30] <hhardy> do we really need git and koji and sugarlabs and now this?
[20:56:46] <lfaraone> mchua: Maybe this is a better project for SL to host...
[20:56:54] <kimquirk_> right
[20:57:13] <cjb> lfaraone: you should know that sugarlabs has so far not decided to host its own source control
[20:57:16] <mchua> hhardy: understood - I agree that OLPC should not host/setup/maintain it at all
[20:57:22] <hhardy> someone want to propose this on IAEP?
[20:57:24] <cjb> it views it as unnecessary duplication
[20:57:40] <cjb> (where it means the rough consensus I noticed last time it was talked about)
[20:57:47] <hhardy> cjb being on board of sugarlabs btw
[20:58:08] <cjb> yeah, although I'm not speaking as someone who makes unilateral decisions for SL.
[20:58:13] <mchua> hhardy: but since this is an infrastructure-related topic it seemed like a good thing to bring it up in VIG first, to make sure effort isn't being duplicated, and toes aren't being stepped on
[20:58:28] <hhardy> it seems like duplication to me
[20:58:30] <cjb> you could attempt to persuade SL to do this. no-one has yet been successful.
[20:58:52] <hhardy> not persuaded yet; let's continue the discussion outside this meeting
[20:59:11] <hhardy> adric: ping?
[20:59:16] <mchua> hhardy: no persuasion needed. if the decision is "vig won't do this," then there doesn't need to be a continuation
[20:59:32] <hhardy> I would say there is no consensus that we should do it
[20:59:39] <hhardy> it looks cool
[20:59:46] <mchua> hhardy: but is there a consensus that we should not do it?
[21:00:04] <hhardy> not as a VIG project is my feeling
[21:00:07] <mchua> hhardy: that's what we need to wrap this up - in that case, I would tell the volunteers who asked for this to take the idea elsewhere, or implement it themselves.
[21:00:23] <mchua> hhardy: ok - then that closes out that item on our agenda as far as I'm concerned; no need to have it come up again
[21:00:28] <hhardy> I think it's worth discussing on the devel list and with sugarlabs
[21:00:40] <mchua> ok, but outside vig, right?
[21:00:43] <edmcnierney> (I need to scoot in a bit - looks like we're nearing the end, and I think the server mirroring topic is my main "please help with this" topic.)
[21:00:51] <hhardy> thanks ed
[21:00:58] <hhardy> any new business?
[21:01:02] <kimquirk_> thanks edmcnierney
[21:01:24] <edmcnierney> We're going to keep this new time, no? It's much more convenient for me.
[21:01:33] <hhardy> I would like to keep it
[21:01:38] <edmcnierney> (i.e. if the answer is yes, then you can expect me to show up :))
[21:01:42] <hhardy> any objection?
[21:01:45] <hhardy> good
[21:02:12] <hhardy> lfaraone: what's the meetbot tag for decided?
[21:02:18] <kimquirk_> good with me
[21:02:50] <lfaraone> hhardy: there isn't one.
[21:03:00] <hhardy> #endmeeting

Meeting ended.

Information on meetbot is available at meetbot.debian.net