18:59:41 <waldi> #startmeeting
18:59:41 <MeetBot> Meeting started Wed Aug 14 18:59:41 2019 UTC.  The chair is waldi. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:59:41 <MeetBot> Useful Commands: #action #agreed #help #info #idea #link #topic.
18:59:50 <waldi> #chair serpent_
18:59:50 <MeetBot> Current chairs: serpent_ waldi
19:00:03 <bootc> waldi, serpent_: wfm :-)
19:00:07 <serpent_> How does bot work?
19:00:21 <kuLa> magically :-)
19:00:22 <waldi> serpent_: https://wiki.debian.org/MeetBot
19:01:30 <serpent_> Thanks
19:01:48 <serpent_> Let's start. I guess the most important topic for now is sprint
19:02:06 <serpent_> #info Sprint at MIT
19:02:14 <waldi> #topic Debian Cloud Sprint 2019
19:02:44 <serpent_> It looks like the only safe date for now is 2019-10-14 till 2019-10-16
19:03:03 <noahm> Those dates work for me.
19:03:04 <serpent_> We have reserved rooms, and the only person (that I know about) who cannot come is Bastian
19:03:15 <noahm> waldi: could you join remotely?
19:03:22 <noahm> at least for a while?
19:03:41 <serpent_> 2019-09-30 till 2019-10-02 slot is gone (room already reserved by someone else)
19:04:11 <waldi> noahm: i found a way to make it work and join you at MIT
19:04:21 <serpent_> Cool!
19:04:26 <Mrfai> great
19:04:50 <serpent_> So can we set these dates and start preparing (i.e. buying plane tickets, reserving hotels, etc.)?
19:05:10 <bootc> I'll attend as best I can over IRC
19:05:13 <kuLa> tho right now I'm not sure if I can make it, but in the worst case I'll try to join remotely
19:05:24 <serpent_> #info No objections for dates 2019-10-14 till 2019-10-16
19:05:41 <waldi> #agreed Sprint 2019 will be at MIT from 2019-10-14 till 2019-10-16
19:05:55 <serpent_> #info kuLa is not sure, but in the worst case he'll be on IRC (or other way)
19:06:52 <serpent_> #action I'll get paperwork with DPL to get sponsorship and similar
19:06:56 <Mrfai> I guess we should look for travel costs and send them until $date to serpent_ so you can sum all costs and ask DPL for approval of costs?
19:07:06 <serpent_> Exactly
19:07:40 <serpent_> Please send me info about costs. I'll ask Sledge and Sam how to do this properly, according to all rules
19:07:43 <Mrfai> any proposal for the hotel?
19:07:53 <serpent_> Any local people?
19:08:17 <kuLa> can you provisionally count me in pls, I'll try to confirm asap if I'm able to attend in person or not
19:08:23 <serpent_> As we are 2 months before sprint, we could try to reserve more rooms, and possibly share them to save costs.
19:08:34 <utkarsh2102[m]> Sam works next to MIT, afaik :P
19:08:37 <serpent_> kuLa: OK
19:08:40 <noahm> I'm formerly local and can provide some help. jproulx is local and can also help with things.
19:09:28 <serpent_> I guess the most urgent thing is to know how many people will come (please send emails!) and then we can try to reserve hotel rooms.
19:10:32 <Mrfai> yes
19:10:37 <serpent_> Does anybody object to sharing rooms?
19:11:12 <Mrfai> no, except with snoring people
19:11:23 <arthurbdiniz> serpent_ we should send you all the information to this email: serpent@debian.org?
19:11:27 <serpent_> Or if anyone has special needs (i.e. arriving earlier, leaving later) please put this info in email
19:11:42 <serpent_> Yes, that's the best solution
19:11:51 <kuLa> ack
19:12:35 <serpent_> For snoring - does anyone know if we can get sponsorship for work-grade ear protectors ;-)
19:12:54 <utkarsh2102[m]> :P
19:13:17 <waldi> serpent_: the simple ones are not enough?
19:13:41 <noahm> people who are sensitive to snoring can sleep next to the white-noise source that is MIT's OpenStack installation.
19:13:48 <serpent_> #action Please send me the info about travel/hotel (and if Debian sponsorship is required) to serpent@debian.org
19:13:50 <utkarsh2102[m]> serpent_: just to be sure, what all details would you need?
19:14:03 <utkarsh2102[m]> Ah, got it :)
19:14:57 <serpent_> Arrival and departure dates, for how many (and which) dates you need hotel, and costs of those if you apply for Debian support
19:15:16 <Mrfai> concerning food sponsorship: IIRC Jonathan said we need sponsors for food.
19:15:20 <serpent_> If not (i.e. your company or someone else will pay) - you don't need to send this info to me
19:15:41 <serpent_> #action I'll update our wiki page with dates of sprint
19:15:44 <utkarsh2102[m]> Ack, thanks.
19:16:35 <serpent_> I'll probably need some help from Sledge (as he was dealing with this previous time) but AFAIK this was the info we were gathering
19:16:59 <serpent_> Then all other details (scans of receipts, etc.) you'll need to send to SPI
19:17:03 <kuLa> serpent_: I think I can help as well I was doing this for 2 sprints
19:17:24 <serpent_> Cool - I'll send email to you and Sledge (probably tomorrow or so)
19:17:37 <kuLa> ack
19:17:50 <serpent_> So - what do we plan to do on sprint, or even before?
19:18:30 <noahm> I think we can build a sprint agenda on the wiki. Unless anybody really has anything that needs to be discussed here now.
19:18:33 <serpent_> I guess the most important is to spread the knowledge so more than one person knows how to build and publish images
19:19:06 <Mrfai> People should use our tool in their environment. Then we will see what needs more documentation.
19:19:13 * kanashiro waves o/
19:19:41 <noahm> Mrfai: even within the cloud team, not a lot of people know how the salsa pipelines work.
19:19:48 <kuLa> kanashiro: o/
19:19:48 <noahm> I think that's what serpent_ is getting at.
19:19:49 <serpent_> noahm: agreed, but we can also discuss it here while everyone is here
19:20:02 <serpent_> Exactly
19:20:40 * kanashiro is reading the backlog
19:20:42 <serpent_> #action We need to (should?) put the sprint agenda on the wiki
19:20:49 <Mrfai> ack
19:21:15 <serpent_> And as you most probably know, Sledge has resigned from being delegate.
19:21:30 <noahm> should we nominate a replacement?
19:21:33 <kuLa> and we probably need a 3rd one :-)
19:21:41 <kuLa> +1 to what noahm said
19:21:59 <serpent_> Sam (DPL) has not yet sent updated delegation, but we should discuss it; possibly before but the best in person during sprint
19:22:16 <noahm> wfm
19:22:18 <kanashiro> Sledge also mentioned that Luca has been busy, so we probably need more delegates
19:22:55 <waldi> we don't have a lot of possible candidates
19:22:58 <serpent_> Yes - but it's still good to have someone from SPI involved as now accounts are under their control
19:23:01 <kuLa> who is keen on  jumping on it?
19:23:26 <kuLa> as a delegate I mean
19:23:57 <serpent_> We don't need to decide now, but think about it
19:23:58 <waldi> serpent_: SPI does not have any access to those accounts, nor have they asked. it just gives the name for now
19:24:19 <serpent_> OK - then it's another item
19:24:33 <kanashiro> we should think about it and discuss during the sprint
19:24:41 <kuLa> serpent_: I know tho would be nice if ppl start thinking about nominations
19:24:42 <serpent_> #action Discuss during sprint how accounts are managed
19:25:33 <serpent_> That's what I mean - let's think about it, and let the unconscious (hi Freud!) work on it
19:28:01 <kanashiro> next topic?
19:29:18 <serpent_> Image finder?
19:29:26 <bootc> May I suggest: status of accounts, in particular the AWS account(s)?
19:29:35 <serpent_> OK
19:29:42 <waldi> #topic Status of accounts
19:29:57 <noahm> AWS accounts are still lost in the bureaucratic maze of AWS administration.
19:30:12 <noahm> I'm trying to get the matter escalated.
19:30:22 <bootc> Is there anything that anyone can do to help?
19:30:34 <waldi> which means we currently only have our legacy jeb owned account
19:30:34 <rvandegrift> serpent_: yes, I'd love to see the status of the image finder
19:30:50 <noahm> If you're an AWS customer, you could contact your support channels and complain about buster not being available.
19:31:12 <noahm> That helps get the fire going under the right people
19:31:34 <bootc> noahm: I can definitely do that. Any names or departments worth mentioning to get it escalated to the right place?
19:32:49 <noahm> Mention that you've raised the issue with Debian and that the issue needs to be addressed through the AWS OS partners integration team.
19:33:04 <bootc> I'll get onto it, thanks noahm
19:33:17 <arthurbdiniz> About the image finder: at the DebConf BoF we talked about the remaining feature needed for a complete project, and that feature was token-based authentication. Now it's just a matter of reviewing and approving this feature into master https://salsa.debian.org/cloud-team/image-finder/merge_requests/39
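[Editor's note: a minimal illustration of the token-based authentication idea mentioned above. This is only a generic stdlib sketch — the actual implementation in MR !39 may well differ. Tokens are random secrets; only their digests are stored, and comparison is constant-time.]

```python
# Hypothetical sketch: token-based write authorization using only
# the Python standard library. Not the image-finder code itself.
import hashlib
import hmac
import secrets

def issue_token() -> tuple[str, str]:
    """Return (token, digest); hand the token to the client, store only the digest."""
    token = secrets.token_urlsafe(32)
    digest = hashlib.sha256(token.encode()).hexdigest()
    return token, digest

def is_authorized(token: str, known_digests: set[str]) -> bool:
    """Check a presented token against the stored digests in constant time."""
    digest = hashlib.sha256(token.encode()).hexdigest()
    return any(hmac.compare_digest(digest, d) for d in known_digests)
```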
19:33:22 <serpent_> Other accounts are OK?
19:33:38 <waldi> the Azure migration is completed
19:33:42 <kanashiro> arthurbdiniz, image finder should be the next topic :)
19:33:43 <serpent_> I mean Azure and GCE?
19:33:48 <waldi> we did nothing for google yet
19:34:15 <noahm> Regarding the old jeb account, we have a goal of migrating our CloudFront (CDN) deployment from that acct to one of the new ones eventually.
19:34:16 <bootc> Do we need to do anything for Google?
19:34:16 <serpent_> So Azure is finished, AWS is stuck on Amazon side, GCE is not started?
19:34:29 <waldi> yes
19:34:36 <noahm> But we aren't able to work on that until the new acct is set up. And I don't know who's actually going to do that work.
19:34:58 <kuLa> I have a quick Q, there is an access request to the salsa cloud group from Utkarsh Gupta @utkarsh2102-guest and I think this is a 2nd attempt as it was previously denied (I think) - any ideas why, and should we grant access?
19:35:02 <serpent_> Is there something that we need from Debian side, SPI side, or Google side?
19:35:20 <kuLa> noahm: I think it was jimi and luca, tho both are v busy atm
19:35:43 <utkarsh2102[m]> kuLa: I am here. You may ask if you want to ask anything about it :)
19:35:44 <kuLa> for GCP I think we need SPI and G
19:35:54 <waldi> kuLa: i would like to see some contributions first
19:35:59 <noahm> yeah. The CDN is something I'd be interested in working on as well, but I am also busy...
19:36:36 <bootc> It would be great to work out (with jimi and luca?) how to get it handed over and progressed, then.
19:36:39 <waldi> kuLa: for google we need to decide what we exactly want. the doing should be easy as soon as we figured this out
19:36:42 <kuLa> noahm: CDN on AWS or GCP? if on AWS jcristau is your man as he set it up
19:36:55 <noahm> kuLa: AWS specifically
19:37:14 <waldi> noahm: the global loadbalancer stuff?
19:37:26 <serpent_> I guess we agreed to have mirrors on all 3 providers?
19:37:40 <serpent_> Or are you talking about something else with CDN?
19:37:41 <bootc> It would be great to get the config into CloudFormation if it's not already, etc... That's something I can help with (CFN and AWS in particular)
19:37:41 <noahm> waldi: the cloudfront component of it, anyway, if not the global LB
19:38:09 <waldi> noahm: cloudfront is nasty stuff. i myself wanted to look at the aws global lb stuff
19:38:23 <noahm> we are straying from the current topic. :)
19:38:24 <waldi> serpent_: yes, we did, sort of
19:38:35 <serpent_> If I remember existing solutions, it was mostly Terraform and Ansible. But no more details
19:38:42 <noahm> maybe a new one: cloud provider package distribution network?
19:38:49 <kuLa> I think from GCP we need our own org so we can control access and possibly do SAML
19:39:05 <waldi> kuLa: we have SPI's org, this works
19:39:19 <waldi> debian salsa is using it a lot
19:39:20 <kuLa> serpent_: yes it was TF as I wrote it and handed to DSA
19:39:23 <serpent_> We might need separate email though
19:40:04 <serpent_> I haven't seen any commits to those repos recently, so it's still TF :-)
19:40:48 <kuLa> waldi: is G taking care of the bill for this? If so, and if delegates can be owners, I think we're ok
19:41:37 <waldi> kuLa: in the salsa case, G is sponsoring "money" on a billing account we, the salsa admins, have access
19:42:03 <kuLa> utkarsh2102[m]: do you have any code you'd like to be merged to any of the team repos?
19:42:38 <kuLa> waldi: sure but is it via credits or via billing account belonging to G?
19:42:47 <waldi> via credits
19:43:22 <serpent_> So what's the issue/situation with G account right now?
19:43:48 <serpent_> AWS account belonged to jeb, Azure to Credativ - that's why we had to get new ones
19:44:36 <rvandegrift> IIRC G owns the debian-images project where they're published from
19:45:40 <kuLa> I'm not sure how Hydroxide and zack envisaged this, but I think if we got an org with a billing acc associated with G for picking up bills it'd be great; if not, we need to set up a billing acc as it's done for salsa and chase G for credits now and then, which looks suboptimal to me
19:47:09 <kanashiro> didn't we discuss how to create those accounts during the last sprint?
19:47:27 <serpent_> Yes, but somehow it got lost in the works
19:47:46 <kanashiro> so we have a plan, we need to implement it
19:48:07 <serpent_> AFAIR the ball was on the side of cloud providers (as now with AWS) or SPI
19:48:28 <serpent_> But both people closest to SPI (Jimmy and Luca) are quite busy right now
19:49:45 <utkarsh2102[m]> kuLa: I was trying to update a couple of issues under team repo, but couldn't. Had packaged the dependencies for the cloud-finder recently.
19:50:26 <utkarsh2102[m]> But if waldi wants to see some code contributions first, then that is fine as well.
19:51:49 <Mrfai> Is it really needed to contribute before becoming a member?
19:52:36 <serpent_> Are we finished with accounts? We don't have any closure, but I guess we can continue on ML, during next meeting, or during Sprint
19:52:50 <waldi> yes, please
19:54:02 <kuLa> I was trying to find action plan for GCP in the notes from 2018 sprint but looks like there is nothing special there
19:55:27 <waldi> do we have more topics?
19:55:28 <serpent_> We didn't have anything specific for accounts
19:55:32 <noahm> waldi: the images being generated for publication by CI on salsa are based on the master branch, right?
19:56:03 <waldi> noahm: yes. and as long as we don't have breaking changes i would like to continue this
19:56:11 <serpent_> more about user integration and so on, but it depends on accounts under our (Debian/SPI) control
19:56:16 <noahm> I think we should branch.
19:56:21 <noahm> It'd be best if dev work doesn't risk breaking stable images.
19:56:44 <noahm> E.g. if we want to introduce changes to the list of installed packages, that'll come as a surprise to stable users.
19:57:24 <waldi> anyway, i have some points
19:57:51 <kuLa> branch per Debian release with updates going on top of it?
19:58:17 <noahm> We should have a "stable" branch that only gets targeted changes.
19:58:34 <noahm> While master is "unstable", and can get breaking changes
19:58:35 <waldi> no, it should never be called "stable"
19:59:27 <waldi> first of my topics is: mirror names
19:59:31 <waldi> #topic mirror names
20:00:03 <waldi> no one took up that task to define names for mirrors?
20:00:35 <kuLa> waldi, noahm DEP14 for branch names :-)
20:01:53 <serpent_> kuLa: this might be quite good idea
20:02:11 <rvandegrift> waldi: what names need to be discussed?  I'm just not aware I think
20:02:16 <noahm> waldi: yes, of course it shouldn't be literally "stable"
20:02:29 <noahm> but I do think we want a branch for buster, now that it is stable.
20:02:49 <noahm> FWIW, the stretch AMIs have been built from a dedicated branch for a long time.
20:03:02 <kuLa> waldi: do we have to be creative with mirror names? why not something like [provider]-mirror[counter].debian.org
20:03:05 <noahm> Which has helped keep things consistent as master has churned
20:03:22 <kanashiro> +1 for using DEP14 nomenclature for branches
20:04:24 <waldi> kuLa: it needs to be discussed and implemented with DSA, that's the more important point
20:05:05 <waldi> kuLa: provider.cloud.mirrors.debian.org
20:05:12 <kuLa> I think this is a delegate or delegate of the delegate task :-)
20:05:13 <serpent_> #idea naming of branches in our images' repo
20:05:28 <serpent_> #idea naming of in-cloud mirrors
20:06:51 <serpent_> Any more topics, or are we getting a bit tired?
20:07:06 <waldi> #topic anything else
20:07:10 <serpent_> BTW - I updated our sprint's wiki page
20:07:11 <kuLa> #info mirror naming convention could be provider.cloud.mirrors.debian.org
20:07:21 <waldi> yes, i have another one
20:07:39 <serpent_> Please put attendance and workitems proposals there
20:08:15 <kanashiro> a random thing: we might want to prepare an agenda somewhere before the meetings
20:08:23 <waldi> i would like to move the stuff with direct access to infrastructure, daily images, release images, cleanup, into its own group
20:08:40 <serpent_> You mean in Salsa, or somewhere else?
20:08:48 <waldi> on salsa
20:09:00 <noahm> So only limited members of the cloud team can actually publish images?
20:09:50 <serpent_> We already partially discussed it during the last sprint (when we were talking about user accounts on cloud providers)
20:10:20 <serpent_> And during Salsa user cleanup (i.e. demoting some users in our project)
20:10:35 <serpent_> So should we have separate project?
20:10:40 <waldi> noahm: yes, the same as now. but splitting it into its own group makes accidents less likely
20:10:46 <waldi> serpent_: s/project/group/
20:10:47 <serpent_> e.g. debian-cloud-publishing?
20:10:58 <noahm> yeah. It should be harder to publish an image than to make a simple contribution to the debian-cloud-images repo.
20:11:05 <noahm> So, +1 to that idea.
20:11:12 <rvandegrift> yea, also agreed
20:11:19 <serpent_> Agreed - just like access to Casulana is also limited
20:11:20 <noahm> But... Who gets access to the big "go" button?
20:11:23 <kuLa> do we really need it? separation on the CI stages I think is better option to not alienate contributors
20:11:48 <noahm> I don't think this is alienating anybody.
20:12:04 <noahm> I actually think it makes it easier to become a contributor, since the bar doesn't need to be quite as high.
20:12:04 <serpent_> I guess this depends on how we configure pipeline and integration
20:12:23 <noahm> There's no risk that you'll accidentally publish something to all the clouds.
20:12:32 <serpent_> But as we're now discussing our workflows - this is not a bad idea
20:12:46 <kuLa> I'm not opposing the idea just asking TBH
20:13:12 <waldi> noahm: it is more about accidentally freeing the access information stored within the projects
20:13:29 <serpent_> Again - we don't need to decide it right now, but something to think about
20:14:02 <serpent_> Then we'll need to discuss which projects belong to the main group and which to the internal one
20:14:44 <serpent_> And how to allow for contributions from non-members (e.g. changes in list of installed packages, default configuration, etc.)
20:15:09 <waldi> i only speak about image-related ones: a new images-daily, the existing images-release and images-housekeeping
20:15:26 <waldi> not even the existing debian-cloud-images would be affected
20:15:48 <serpent_> And maybe mirrors' infrastructure - but not sure about that
20:16:04 <waldi> yeah, when we start to use it
20:16:52 <serpent_> #idea Separate Salsa group for cloud image publishing and mirrors, with more restricted membership
20:17:11 <Mrfai> #info https://wiki.debian.org/Sprints/2019/DebianCloud2019
20:17:41 <waldi> okay. anyone got something else?
20:17:45 <noahm> not me.
20:18:01 <serpent_> We still haven't talked about image finder
20:18:13 <waldi> right
20:18:22 <waldi> what do you want to talk about?
20:18:54 <kanashiro> #topic image finder
20:18:57 <waldi> while technically being backup mentor, i'm also completely lost and have no real idea about the current state as i'm lacking time
20:19:20 <noahm> Any idea when we might be able to see a working prototype/preview?
20:19:40 <serpent_> We'll need to integrate it. Arthur is GSoC student working on it, utkarsh2102 is helping with making proper Debian package from it
20:19:53 <serpent_> Is it deployed anywhere?
20:20:19 <serpent_> During BoF we had discussion that Thomas will provide VM to run it (at least in the beginning)
20:20:22 <arthurbdiniz> not yet
20:20:23 <kanashiro> arthurbdiniz presented his work at DebConf, and after that he has been implementing a feature where people will need a token to post new data
20:20:25 <serpent_> What's the status of it?
20:20:26 <waldi> not sure what we need packages for. if we are going to deploy it anywhere usable it's not going to use packages
20:21:20 <kanashiro> I will review this new feature this week and we are planning to deploy it somewhere, zigo offered a VM for us to do this
20:21:37 <serpent_> But still we'll need some people to be familiar with that code if we're supposed to deploy it under team care
20:21:51 <zigo> o/
20:21:54 <kuLa> package is nice to have, running it is not an issue either - we can run it from the AWS account for example
20:21:57 <zigo> Please ping me next time ...
20:22:31 <noahm> yeah, I would think that the AWS infra account (or any other cloud provider) should give us plenty of resources for a proper deployment (ideally on more than just a single VM)
20:22:49 <noahm> ...Assuming we ever get the AWS infra account in order.
20:23:00 <kuLa> TBH I'd like to see how it's running and automate deployment, etc
20:23:07 <kuLa> noahm: lol :-)
20:23:43 <kanashiro> after that we want to see the best way to populate the image finder, if we want the image finder to parse the metadata file generated by salsa ci scripts or if we want the salsa ci scripts posting data to the image finder when new images are published
20:24:10 <kanashiro> kuLa, we have a docker image that you can run it for now
20:24:29 <serpent_> And an API so we can integrate it better - and possibly push images to OpenStack providers
20:25:06 <kuLa> kanashiro: ack, I'll try to have a look at this later this week
20:26:15 <kanashiro> kuLa, the image is available in the registry we have enabled in the image-finder repo
20:26:31 <zigo> We definitively need an API there, yes.
20:26:43 <kanashiro> https://salsa.debian.org/cloud-team/image-finder/container_registry
20:26:49 <kuLa> I'm not sure if we should rely on 3rd party systems like salsa for populating its db
20:27:42 <utkarsh2102[m]> kanashiro: is that something like docker registry?
20:27:57 <kanashiro> utkarsh2102[m], yep
20:28:05 <arthurbdiniz> we can integrate with one stage on the debian cloud image CI calling the API
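[Editor's note: a sketch of what "one stage on the debian cloud image CI calling the API" might look like. The endpoint URL, payload fields and token header here are all assumptions — the real image-finder API was still under review at the time.]

```python
# Hypothetical sketch: a CI step reporting a freshly published image
# to the image finder. Field names and the Bearer-token header are
# illustrative, not the actual API.
import json
import urllib.request

def build_payload(provider: str, release: str, region: str, image_id: str) -> dict:
    """Collect the metadata one published image would report."""
    return {"provider": provider, "release": release,
            "region": region, "image_id": image_id}

def post_image(api_url: str, token: str, payload: dict):
    """POST the payload to the (assumed) image-finder endpoint."""
    req = urllib.request.Request(
        api_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST")
    return urllib.request.urlopen(req)
```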
20:28:06 <zigo> arthurbdiniz: Should we use the master branch now, for the image finder?
20:28:09 <utkarsh2102[m]> Coolio.
20:28:28 <arthurbdiniz> the master branch is the stable content
20:28:39 <arthurbdiniz> but the app there is still not ready
20:28:56 <zigo> arthurbdiniz: Let me ask a better way: against what branch should I send pull requests?
20:29:10 <arthurbdiniz> development
20:29:13 <zigo> Ok.
20:29:14 <kanashiro> there are some pending MRs that I need to review
20:29:36 <arthurbdiniz> 9 MRs to review
20:30:17 <arthurbdiniz> @zigo I created a branch for you https://salsa.debian.org/cloud-team/image-finder/merge_requests/34 to work on PBR
20:30:20 <kanashiro> zigo, arthurbdiniz has documented the project's branch policy here: https://salsa.debian.org/cloud-team/image-finder/wikis/Branch-Policy
20:30:45 <kuLa> wow a big one registry.salsa.debian.org/cloud-team/image-finder                            latest              a5855a6b66ee        2 weeks ago         524MB
20:30:50 <zigo> arthurbdiniz: I'll redo it.
20:31:36 <zigo> IMO, this branch stuff is a bit overkill, especially for something not yet in production... :P
20:31:57 <kanashiro> kuLa, there is room for many improvements :)
20:33:36 <serpent_> So, any actions here?
20:33:45 <kuLa> kanashiro: :-) ack
20:34:02 <kanashiro> I hope arthurbdiniz will keep working on image finder after gsoc :)
20:34:15 <arthurbdiniz> kanashiro sure i will
20:34:21 <kuLa> serpent_: I'd say we need to figure out where we are going to host it
20:35:02 <kuLa> and if cloud team is going to maintain it or DSA if the latter we need to work with them on infra
20:35:05 <waldi> kuLa: we want .debian.org, at least in the long term
20:35:12 <serpent_> AFAIR for now Infomaniak (zigo) will provide VM. Then, after we have some experience, we can move it somewhere
20:35:32 <kanashiro> I am also planning to request a .debian.net domain for the image finder
20:35:46 <serpent_> waldi: agreed, but first we should have something running - and here debian.net sounds good
20:36:09 <kuLa> +1 to that
20:36:21 <zigo> The debian.net subdomain is easy to do.
20:36:28 <zigo> The VM @infomaniak too ...
20:36:33 <noahm> We should try to get it up and running asap so we can poke at the UI, etc, and give feedback
20:36:42 <kuLa> debian.org are usually under DSA and this will take time and I'm not sure if we should dump this on them
20:36:53 <serpent_> +1 to running it ASAP
20:36:53 <zigo> Both done in less than 1 minute with a command line ... :P
20:36:54 <kuLa> noahm: +1
20:37:23 <noahm> It'll also be good to be able to work out how it fits into the publication workflow.
20:37:42 <zigo> Though I want it packaged, together with a puppet or ansible script to set it up.
20:37:43 <kanashiro> I have never created a subdomain, so it will be more than 1 min for me :-)
20:37:52 <kanashiro> but we will do it asap
20:38:15 <zigo> Well, 1/ pbr stuff 2/ debian package 3/ puppet thing 4/ hosting.
20:38:22 <zigo> It *must* be in this order.
20:38:48 * kuLa need to drop off so cu later guys
20:38:57 <serpent_> see you
20:38:58 <kanashiro> can't we use the docker image for testing purpose?
20:39:38 <utkarsh2102[m]> I'm sure we can test the initial set of functionality using them.
20:39:51 <noahm> we don't need a .deb for hosting, either for testing or for production.
20:39:59 <noahm> Plenty of what DSA runs today is not packaged.
20:40:10 <kanashiro> I think you are imposing many constraints with this "must"
20:40:25 <zigo> noahm: Sure, it can be messy, though we also can do things right! :P
20:40:43 <zigo> It's not as if all of that was hard to do.
20:40:54 <waldi> so we have anything left to discuss?
20:41:19 <kanashiro> I think we are done
20:41:25 <waldi> good
20:41:29 <waldi> #topic Anything else
20:41:36 <noahm> Should we do this again next month?
20:41:37 <waldi> again: anyone got something else?
20:41:42 <arthurbdiniz> serpent_ and waldi can we talk about debian docker images?
20:41:47 <serpent_> Nothing for now
20:41:52 <waldi> arthurbdiniz: no, why?
20:41:57 <noahm> arthurbdiniz: the right people aren't here.
20:42:10 <serpent_> Docker - what do you mean? That we should prepare them?
20:42:25 <noahm> the cloud team has not historically been involved in the creation of the public docker images.
20:42:39 <arthurbdiniz> to find the DD that builds the docker images, which we talked about at the BoF
20:42:41 <serpent_> Maybe let's add this to the sprint agenda, and (maybe) talk about it during next meeting
20:42:57 <kanashiro> we should have this monthly meeting with a defined agenda
20:43:05 <serpent_> Yes.
20:43:08 <Mrfai> ack
20:43:18 <serpent_> Any proposals for next month meeting?
20:43:21 <waldi> #action discuss docker images
20:43:27 <waldi> #topic Next meeting
20:43:38 <zigo> arthurbdiniz: Did you write a wsgi entroy point for your app?
20:43:39 <waldi> lets just use the same date and time
20:43:48 <kanashiro> agreed
20:43:56 <noahm> 14th, 1900 UTC?
20:43:57 <serpent_> So second Wednesday of month?
20:44:05 <arthurbdiniz> zigo not yet
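[Editor's note: zigo asked above about a WSGI entry point for the app. A minimal, generic sketch is shown below — the names are illustrative only; the actual image-finder application object (likely a Flask app) would replace this placeholder.]

```python
# wsgi.py — hypothetical minimal WSGI entry point (PEP 3333 style).
# A real deployment would import the image-finder app instead, e.g.
# "from app import app as application" (name assumed, not confirmed).
def application(environ, start_response):
    body = b"image-finder placeholder\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```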
20:44:11 <waldi> 11th of september
20:44:17 <serpent_> 14th of September will be a Saturday
20:44:35 <noahm> 11th, 1900 UTC, then.
20:44:43 <serpent_> any objections to 9/11 from USA team members?
20:44:44 <zigo> We can just settle on the 2nd wednesday of each month, if that's fine for everyone.
20:44:59 <Mrfai> ack
20:45:20 <kanashiro> and we should have a wiki page (or something else) where people can add topics before the meeting
20:45:34 <serpent_> Mostly fine for me, might be harder at the end of the year, but not sure yet
20:45:54 <zigo> kanashiro: Gobby is usually the Debian way... :P
20:46:07 <serpent_> #action wiki page with proposed topics of next IRC meeting
20:46:26 <kanashiro> zigo, that also works, some other teams use wiki pages
20:46:34 <zigo> I'm fine with both.
20:46:44 <noahm> either one is fine. let's just add it to the topic of this channel so it's easy to find.
20:46:49 <serpent_> OK, let's wrap this up
20:46:59 <waldi> #agreed next meeting will be 2019-09-11T19:00Z
20:47:15 <serpent_> I'll summarize this meeting - but it might take a few days
20:47:22 <waldi> #endmeeting