19:00:06 #startmeeting
19:00:06 Meeting started Wed Aug 12 19:00:06 2020 UTC. The chair is rvandegrift. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:06 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:00:12 * noahm is here
19:00:25 here
19:00:38 hey guys
19:01:06 hi
19:01:36 I'm having some bigger problems with my main server, so let's start
19:01:47 so I have time to fix that later
19:01:48 #topic shrinking the image
19:01:50 Mrfai: you wanted to discuss some items related to shrinking the image
19:02:04 yes.
19:02:46 As I wrote, we could provide raw images for openstack by (manually) extracting them on petterson from our tar and creating the sha sum
19:03:16 Does this make sense as a first step?
19:04:25 I'd really rather not rely on anything manual.
19:05:22 the upload requires an ssh key, could the pipeline use that to run a script remotely?
19:05:59 the pipeline already runs stuff?!?
19:06:37 does the missing sparse file support cause a problem there? maybe I misunderstood your email
19:07:53 I also did not understand the problem with sparse. IMO we want to copy the whole 2GB image, because we need this 2GB raw for openstack.
19:08:26 Mrfai: uhm, no? you only need to write the parts that are not empty
19:08:46 rvandegrift: we have all the building blocks in there already
19:09:31 having the building blocks there does not help our users. They need the raw image.
19:09:39 on petterson
19:10:50 waldi: just so I understand, you'd rather uncompress & upload the raw image from the pipeline? If so, I think that sounds fine
19:11:17 everything should happen in the pipeline, one way or another. That's why we have it.
19:11:41 What are the actual technical blockers preventing us from doing this in the pipeline?
19:11:54 Is it just a matter of somebody needing to implement it, or are there unsolved problems?
19:13:05 Mrfai: one of the problems with this is: _two_ people claim they need it. they can't show documentation from vendors that says so. they get angry if asked for it
19:13:36 (okay, i know the technical background now, but still)
19:14:13 waldi: So you want to restart the discussion about whether we really need this?
19:14:27 Mrfai: no
19:14:47 I don't think we should do that; if we can provide images that are directly consumable, we should, even if nobody is specifically asking for it.
19:15:08 agreed
19:16:47 Back to this question from noahm: "What are the actual technical blockers preventing us from doing this in the pipeline?"
19:20:06 sounds like maybe there are none
19:20:34 waldi: ?
19:20:58 there are no technical blockers, just a missing implementation
19:21:57 cool. we should describe the request clearly in an issue on salsa so somebody can pick it up and start the implementation. Unless somebody is ready to start work on it now.
19:22:59 Is the idea simply that we want raw (not compressed, not tarred) openstack images, along with checksums, available for download on cloud.debian.org?
19:23:14 yes
19:24:33 we could verify: 1. whether we can use http transport compression with pre-compressed files and 2. whether openstack can utilize that
19:25:41 that would certainly save disk space for us.
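[Editorial note, not part of the meeting log: a minimal Python sketch of the sparse-aware copy waldi describes at 19:08, i.e. writing only the non-empty parts of the raw image so the 2GB file stays sparse on disk. Block size and file names are illustrative.]

    #!/usr/bin/env python3
    """Copy a raw disk image, skipping all-zero blocks so the target stays sparse."""
    import sys

    BLOCK = 1024 * 1024  # 1 MiB; any reasonable block size works

    def sparse_copy(src_path, dst_path):
        zero = bytes(BLOCK)
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            while True:
                chunk = src.read(BLOCK)
                if not chunk:
                    break
                if chunk == zero[:len(chunk)]:
                    # Seek past empty blocks instead of writing them;
                    # the filesystem leaves a hole.
                    dst.seek(len(chunk), 1)
                else:
                    dst.write(chunk)
            # Ensure the file has its full logical size even if it ends in a hole.
            dst.truncate(src.tell())

    if __name__ == "__main__":
        sparse_copy(sys.argv[1], sys.argv[2])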
19:26:08 we can save a lot of disk space if we do not keep more than 310 daily image directories
19:26:27 so cleanup of the daily directories is also important.
19:26:30 Mrfai: https://salsa.debian.org/cloud-team/debian-cloud-images/-/issues/29
19:27:02 I know, we have an issue, but still no solution.
19:27:02 (the implementation is lost on my disk somewhere)
19:27:32 back to the raw images. How will we implement this? When?
19:28:06 My idea was to do this manually once for the release(s), until someone has time to implement it.
19:29:10 it might be good to do one manually, so openstack users could validate that it meets their needs
19:29:18 the proponent of that feature? zigo?
19:30:42 I don't think we can volunteer on his behalf. And without a volunteer, I don't think we should do a one-off manual implementation of this task.
19:31:17 It might be different if we knew that we'd have automation built in time for the next buster point release, but we don't know that.
19:31:52 i can look into the http compression part. but i don't know if i can get access to a recent enough openstack to test anything useful
19:32:37 waldi: I would skip the http compression part. Keep it simple.
19:33:12 Agreed; it's a premature optimization. We will want it, but we don't strictly need it at first.
19:35:29 Is there anyone else except waldi who can implement this? I'm not a python guy, and I still do not understand all the bits that happen in the build script and the pipeline, and how access to petterson is done.
19:36:04 I'm sure several of us could. Possibly zigo; I'm sure I could brute-force it.
19:36:05 Is it possible to call a script from the pipeline?
19:36:56 You don't want to call a script from the pipeline. The pipeline is already executing code; you want to extend the existing code.
19:37:45 maybe - the upload uses the json manifest. if we'd need to extend/modify that format, it might be easier to add a new job that just does uncompress & upload
19:38:00 I haven't looked at it in detail though, so it could be that there's no worry here
19:40:07 OK, if someone implements this there's no need for a one-shot manual task. But how long should we wait to get this implemented? The same applies to the daily-cleanup issue, issues/29.
19:44:03 we should wait until someone with sufficient skill and motivation does the implementation.
19:45:27 then it may take some more months until we have raw images and until the dailies get cleaned up. That's why I volunteer to do these tasks manually until we have an implementation
19:45:53 I really fear that nothing will happen for a long time
19:46:54 could be; but if you do it by hand, then we risk it being inconsistent over time. And whoever implements this automation is now bound by whatever you did by hand, otherwise we risk breaking (or at least surprising) our users.
19:49:10 we only have to define the raw image file name IMO. Anything else? Then I would extract the disk.raw from the existing tar, rename it and append its shaxxxsum to the shaxxx file
19:50:02 I would want to know that the plan works before spending time on automation - we could do it on any webserver to avoid creating a pattern for users on cdimage.debian.org. Mrfai: do you have a webserver with enough space/bandwidth? I'd be happy to set something up and stage the files for an operator to test
19:53:15 I do not understand. Why do you need a webserver? Which files do you want to stage there?
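[Editorial note, not part of the meeting log: one way the manual step Mrfai describes at 19:49 could look, sketched in Python. It extracts disk.raw from the published tarball, renames it, and appends its SHA-512 sum to the checksum file. The file names (debian-10-openstack-amd64.tar.xz, SHA512SUMS, and the raw image name) are only illustrative; the actual names were left open in the meeting.]

    #!/usr/bin/env python3
    """Extract disk.raw from the image tarball and record its checksum."""
    import hashlib
    import tarfile

    TARBALL = "debian-10-openstack-amd64.tar.xz"   # illustrative name
    RAW_NAME = "debian-10-openstack-amd64.raw"     # illustrative name
    SUM_FILE = "SHA512SUMS"                        # illustrative name

    def extract_and_checksum():
        digest = hashlib.sha512()
        with tarfile.open(TARBALL) as tar, open(RAW_NAME, "wb") as out:
            member = tar.extractfile("disk.raw")
            # Stream the image out of the tarball, hashing as we go.
            while True:
                chunk = member.read(1024 * 1024)
                if not chunk:
                    break
                out.write(chunk)
                digest.update(chunk)
        with open(SUM_FILE, "a") as sums:
            sums.write(f"{digest.hexdigest()}  {RAW_NAME}\n")

    if __name__ == "__main__":
        extract_and_checksum()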
19:53:49 do we have other things to discuss?
19:55:26 how many dailies would you like to keep?
19:55:37 around a month
19:55:56 ok. Time based or just the last n builds?
19:55:57 IMO we should keep as many as we can without it becoming a problem.
19:56:28 or the last n builds?
19:57:23 does not really matter. azure does time based
19:57:36 Mrfai: I'll follow up on the mailing list later tonight - maybe my plan is not useful for openstack operators
19:57:58 noahm: IMO it becomes a problem for the users. Having so many versions is confusing. Also for the image finder.
19:58:21 the image finder only needs to handle the latest version
19:59:09 does anyone have other topics we should discuss now?
19:59:26 no
19:59:27 yes
19:59:48 go ahead waldi
20:00:34 continuous maintenance of cloud-init and google-compute-bla
20:01:03 i wanted to make changes, but i now refuse to handle vcs-in-vcs packages
20:01:29 and the ones who opted in don't really do anything, esp. for the google stuff
20:02:31 #topic packaging work
20:02:56 vcs-in-vcs is still by far the most common approach to packaging in Debian
20:03:21 i know. that's why there have been a lot of very heated discussions over the last year
20:04:05 I'm not sure we're going to avoid having another if we bring the matter up here.
20:05:33 if we're the only maintainers and we all agreed, we could convert them - but I don't know if we'll all agree
20:05:56 okay, i'm biased, as i just implemented (hopefully) proper git handling as another dpkg source format
20:06:47 waldi: I've tried your format. I agree that it's nice for working on the package. I'm not sure I agree that it's nice for contributing back to upstream.
20:07:38 noahm: could you elaborate? the vcs-in-vcs approach can't contribute anything back
20:09:03 In the vcs-in-vcs approach with 3.0 (quilt), all Debian changes are effectively rebased on top of the orig code, so they're always in a condition where they can be applied upstream. But in your format (whose name escapes me at the moment) the changes are interleaved with upstream changes. It's difficult to extract a discrete change for a single upstream submission.
20:10:33 unless I am missing something. Maybe if each Debian change is maintained on a change-specific branch, then a 'git diff' can produce a patch that can be sent upstream.
20:14:07 I have to head off in a few minutes, so I'm going to end the meeting, but I don't mean to cut off that conversation
20:15:01 if we don't have anything else to discuss? feel free
20:16:28 #endmeeting
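[Editorial note, not part of the meeting log: a small Python sketch of the workflow noahm suggests at 20:10, assuming each Debian change lives on its own branch forked from the upstream tag; branch and tag names are illustrative. It simply wraps git format-patch to export one change as an upstream-submittable patch series.]

    #!/usr/bin/env python3
    """Export the commits of a single change branch as patches for upstream."""
    import subprocess
    import sys

    def export_change(upstream_ref, change_branch, out_dir="patches"):
        # Write one mail-formatted patch per commit that is on the change
        # branch but not reachable from the upstream ref.
        subprocess.run(
            ["git", "format-patch", f"{upstream_ref}..{change_branch}", "-o", out_dir],
            check=True,
        )

    if __name__ == "__main__":
        # e.g.: export-change.py upstream/20.2 debian/changes/fix-foo
        export_change(sys.argv[1], sys.argv[2])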