17:59:30 #startmeeting
17:59:30 Meeting started Tue Sep 22 17:59:30 2020 UTC. The chair is pollo. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:59:30 Useful Commands: #action #agreed #help #info #idea #link #topic.
17:59:34 #topic Roll Call
17:59:49 o/
17:59:50 Please say hi if you're here for the meeting
17:59:52 o/
18:00:08 #link Agenda: http://deb.li/oNCD
18:00:18 paddatrapper sent regrets
18:02:05 #topic DC20 Teardown status -- Status of final video publication
18:02:17 hi if you're here for the meeting
18:02:24 I think we're ready to shut down the review host now
18:02:34 all the data is copied to vittoria, and I spent today releasing what I could
18:02:39 C&W is done, and a few others
18:02:48 what remains?
18:02:53 Vagrant's talk has issues, I'm going to do that one the same way as the malayalam one
18:02:56 same for Daniel's talk
18:03:21 I see some videos haven't been re-encoded since august, is that expected?
18:03:41 all other talks are listed as "done", except for the leadership one (which is manually modified and I shouldn't touch that anymore), and the ones that are ignored (because they were cancelled)
18:03:46 #info "Free as in Sunshine" and "How is to starting packaging..." still have issues and haven't been published.
18:04:20 thanks, that's helpful to know
18:04:22 tumbleweed: I don't remember the exact cutoff date, but I had to restart certain things at a point
18:04:22 what are the issues? and what is the plan there?
18:04:52 tumbleweed: sreview-cut fucks them up, I'm going to write a five-line script like I did for the malayalam one to replace sreview-cut for those talks
18:04:53 I see several batches. August 27-31, sep 2-4, 6-7, 16-18, 20-22
18:05:13 that sounds reasonably correct
18:05:41 the 27th may indeed be the date when I restarted everything after playing with the fonts
18:05:44 OK, so aside from those 2, not expecting any more changes, we can start releasing and doing final review
18:05:52 (which was required for malayalam)
18:06:00 yup
18:06:10 and I do want to apologise for doing a terrible job
18:06:11 how should final review be coordinated?
18:06:34 I messed up a few times which, as you described, pulled the rug from under you
18:06:56 don't beat yourself up about that - the future matters more than the past
18:06:57 I'm not going to handle uploading the same way I did this time around anymore for future events
18:07:09 pollo: probably a spreadsheet again?
18:07:13 I know, but I do feel an apology is necessary :)
18:07:24 do we have one up already?
18:07:36 we have the old one, but we probably want to start from scratch
18:07:45 yeah, probably a good idea indeed
18:07:47 and probably with a list of all expected talks
18:08:03 wouter: sounds like you've at least figured out what to do for next time... thanks!
18:08:07 #action pollo to make an ethercalc for the final review
18:08:19 olasd: what was the process the last time around?
18:08:24 pollo: do you want me to run a SELECT to get you the list of filenames you should find?
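The "five-line script" wouter mentions above as a replacement for sreview-cut is not shown in the log; a minimal sketch of that kind of manual cut-and-re-encode step, assuming hypothetical file names and cut points, might look like this:

    #!/usr/bin/python3
    # Hypothetical stand-in for sreview-cut: cut a raw talk recording at known
    # in/out points and re-encode it. File names and timestamps are placeholders,
    # not the actual script used for the Malayalam talk.
    import subprocess

    RAW = "vagrant-talk-raw.mkv"         # raw recording from the review host
    OUT = "vagrant-talk-final.mp4"       # file handed to the release pipeline
    START, END = "00:04:12", "00:49:30"  # cut points found during manual review

    subprocess.run([
        "ffmpeg", "-i", RAW, "-ss", START, "-to", END,
        "-c:v", "libx264", "-preset", "slow", "-crf", "20",
        "-c:a", "aac", "-b:a", "128k",
        OUT,
    ], check=True)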
18:08:28 it didn't look like you were using the same spreadsheet (my old reviews were still there)
18:08:35 I have them via my etherpad scraper script
18:08:45 ok
18:08:51 from the website, so we'll be sure not to miss any
18:08:58 tumbleweed: I did not go further than looking at your spreadsheet
18:09:01 OK
18:09:03 yeah, makes sense
18:09:22 wouter: in the future, it would be nice if this final review could be a step in SReview
18:09:49 not that this is urgent, just a suggestion
18:10:09 pollo: my next priority for SReview is to create an admin interface
18:10:23 pollo: once that exists, there will be an option to set arbitrary flags on each talk
18:10:32 you would then be able to use that
18:10:32 nice!
18:10:46 anything else for this topic?
18:11:24 #topic DC20 Teardown status -- Uploads to YouTube and PeerTube
18:11:30 wouter: do you think that's possible by the minidebconfs?
18:11:37 highvoltage: no
18:11:42 wouter: kk thanks
18:11:48 highvoltage: I'm not even sure it'll be possible by the next debconf :)
18:11:56 tumbleweed: have you uploaded things already, and if so, how does replacing videos work?
18:12:04 (admin interface will be a *lot* of work...)
18:12:12 pollo: I uploaded things 2-3 re-encodes ago
18:12:20 so... need to delete that and do them again
18:12:23 are those videos public?
18:12:28 no
18:12:36 because we very quickly found problems when reviewing them
18:12:45 thus the re-encodes
18:12:59 #info some videos were uploaded a while ago, but will need to be re-uploaded. They were not public
18:13:16 I guess this can start once we're done with final review
18:13:26 yeah, I agree
18:13:26 they can go hand in hand
18:13:37 I was doing my final review on the youtube videos, for example
18:13:40 (couple of birds, one stone)
18:13:56 heh :)
18:14:15 I'll be sure to add a "YT" and "PT" entry to the ethercalc so we can track what's been published
18:14:36 do we want to keep the review host up until final review is done?
18:14:51 given the conference is over, I'll probably do a slow drip youtube publish, of 1 per day or something like that
18:15:02 like I've been doing for old events
18:15:10 wouter: I don't think it's something we can reasonably ask Infomaniak
18:15:17 yeah, I think that's a good idea
18:15:31 pollo: mm, point
18:15:43 if work needs to be done, we should be using vittoria
18:15:55 which probably means re-sync the data and db to vittoria?
18:16:42 yeah, I'll do that after the meeting
18:17:01 #action wouter to resync sreview to vittoria
18:17:07 anything else?
18:17:24 I assume everything else is fully torn down?
18:17:37 machines have been taken down afaik
18:17:44 I think shut down but not deleted yet?
18:17:48 ICBW though
18:18:03 I went through the issue list and archived the things we had to
18:18:10 thanks for that pollo
18:18:18 we lost metrics, as the default time period was 15 days
18:18:43 have we fixed ansible so that doesn't happen anymore next time?
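Returning to the YouTube re-uploads and the "slow drip" publishing discussed under this topic: the log doesn't show the team's actual upload tooling, but a rough sketch of a single upload step with the YouTube Data API, assuming OAuth credentials are already available in `creds` and with placeholder metadata, could be:

    # One step of a "slow drip" publish: upload a single reviewed video,
    # e.g. run once a day from cron. `creds`, `path`, `title` and
    # `description` are placeholders coming from the review spreadsheet.
    from googleapiclient.discovery import build
    from googleapiclient.http import MediaFileUpload

    def upload_one(creds, path, title, description):
        youtube = build("youtube", "v3", credentials=creds)
        request = youtube.videos().insert(
            part="snippet,status",
            body={
                "snippet": {"title": title, "description": description,
                            "categoryId": "27"},  # 27 = Education
                "status": {"privacyStatus": "private"},  # flip after final review
            },
            media_body=MediaFileUpload(path, chunksize=-1, resumable=True),
        )
        return request.execute()["id"]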
18:18:49 there is an issue
18:18:53 that works too
18:19:05 as long as we don't forget somehow
18:19:10 it's a time:space tradeoff
18:19:17 14 days sounds totally reasonable for just about any debian event
18:19:28 sure, I know
18:19:38 well, I'd actually think a month or thereabouts is somewhat better
18:19:54 yeah, and the disk usage would be fine
18:19:56 but meh, never mind
18:20:29 #topic MiniDebConfs -- MiniDebConfs schedule
18:20:46 #link http://deb.li/i4Y5M
18:21:04 #info expected schedule of miniconfs is still 3 weekends, 21-22 november, 28-29 november and 5-6 december
18:21:21 that's three weekends back to back, do I see that right?
18:21:28 yes
18:21:38 and it's coming fast
18:21:44 so that's essentially more intense than dc20
18:21:52 intense in a different way
18:21:59 I share olasd's fears on burnout and not having enough core team members
18:22:09 I think we should ask them to reschedule
18:22:24 I think that may have been intentional on highvoltage's part
18:22:24 1 in Nov, 1 in jan and 1 in feb
18:22:35 I don't think I can sell being busy for 3 weekends to Tammy, e.g.
18:22:43 +1
18:22:44 nattie: not really, it just so happens that the good dates lined up like that
18:22:52 ok
18:23:04 wouter: what needs you at all 3 of those?
18:23:08 I certainly won't be up to more than 1 miniconf before the end of year
18:23:15 highvoltage: sreview needs care and feeding
18:23:28 do we really need it?
18:23:31 yes
18:23:33 we need something to do that
18:23:46 it doesn't have to be sreview
18:23:52 if we don't use sreview, someone needs to take care of setting up veyepar
18:23:55 but I don't think the complaints around dc20 are entirely sreview's fault
18:23:55 but that's also work
18:24:25 I'm not saying I definitely won't be there for three weekends (I haven't talked to her yet, anyway), but I do think 3 weekends back to back is pushing it and we should at least explore the possibility of spreading it out a bit more
18:24:35 I don't like having to depend on something that's fragile and needs lots of handholding
18:24:41 whether there's someone to do that or not
18:24:46 highvoltage: everything we use is
18:24:54 highvoltage: it was fragile because I hacked things up in ways it was never designed to
18:25:07 sreview was a lot more fragile than usual this year
18:25:11 veyepar isn't ready to do an online conf either and would need hacking too
18:25:14 highvoltage: next time I'm not going to use the original uploads as injected recordings, I'll just depend on the stream, period
18:25:21 tumbleweed: I don't think that's true, but if it takes a lot of explanation I'll be willing to listen after the meeting
18:25:37 that then reverts SReview to the way it was designed to work, and all the problems should go away
18:25:37 this may be an easier discussion to have in a video conference?
18:25:49 it seems like we do need to make a couple of decisions:
18:26:02 1. can we support these miniconfs so close to each other (up to now, we've been saying we can)
18:26:19 2. what would the prerecorded talk pipeline be
18:26:25 3. what to use for review
18:26:36 4. how to get a bus factor > 1 on running that stack
18:26:55 tumbleweed: 2. and 3. are fairly simple -- the upload pipeline worked reasonably okay I think, so we can reuse the stuff I used there
18:27:19 3. we can use SReview, *but* I'm not going to have separate review for Q&A like I did for dc20
18:27:26 that turned out to be a bad idea and I'm not doing it anymore
18:27:28 highvoltage: do you think asking the miniconfs to reschedule would be problematic?
18:27:41 instead we'll cut the talk as it was livestreamed with the Q&A following afterwards
18:27:44 pollo: nothing has been announced yet, so yes, can do
18:28:03 wouter: so technically, that sounds reasonable. But from a distance it's hard to really say, because we weren't all in the weeds with you on all of the problems
18:28:04 although we'd like to announce the gaming mdco soon, within the next week and also a CfP
18:28:30 tumbleweed: would you say that SReview during debconfs before dc20 was reliable?
18:28:34 wouter: I'm a little more concerned at the meta-issue here, which is that we were depending solely on you, and you were overcommitted
18:28:51 also, if everything is really as broken as everyone says then that just motivates me to replace everything with OBS
18:28:52 yes, I would appreciate it if someone would be willing to learn SReview a bit more
18:28:56 I'd say it's been fine. But there have been problems, and we've been almost entirely dependent on you when there have been
18:28:58 highvoltage: would you be OK with talking to the other 2 events to see what can be done?
18:29:07 ivodd understands it a bit, but that's it
18:29:22 pollo: yep, will do
18:29:24 I don't think december is a good month to hold a miniconf, or at least, it's a terrible month for me
18:29:40 I think just moving the center one to some time in January might be enough, honestly
18:29:41 #action highvoltage to talk to other miniconfs to see if they can be rescheduled
18:29:52 pollo: you have to realise that what's bad for you in your part of the world might be great for someone else in another part of the world
18:29:57 I think this is more than just getting someone to learn SReview more
18:30:13 that leaves the first one in mid november, the last one in early december, and one in jan
18:30:15 agreed, but that means I won't be able to help
18:30:36 one of the things we need is a team to commit to keeping the pipeline running
18:30:42 +1
18:30:45 a few of us tried to help to do that during dc20, and things just blew up
18:30:46 I'd like to work towards a stack that can be at least semi-up (or easily brought up) when needed without so much fudging
18:30:59 highvoltage: yes, but we're not there yet
18:31:07 and won't likely be before those miniconfs
18:31:15 for that we need to CI-test our ansible setup a bit more
18:31:38 from my PoV, stability of the ansible setup was not an issue
18:31:53 although I guess the sreview bits had some idempotency issues
18:32:22 tumbleweed: I'm just replying to highvoltage here -- if we test ansible enough then it doesn't need much fudging because the CI catches all the issues
18:32:49 yeah, but I think that's ignoring the deeper issues
18:32:55 that is, our stack isn't built out of rock solid parts
18:33:03 mm, okay
18:33:06 it's a lot of fragile eggs held together with duct tape and string
18:33:22 afaik nginx dies every now and again but a daily restart seems to mitigate that
18:33:25 I think we're quite a long way from having a debconf-video in a box service
18:33:35 what else is problematic? I don't know if we have a list for these somewhere
18:33:46 anyway, the point I was trying to build towards is that I've been working on !306
18:34:01 and it seems to work fine, and I think it's something we should do on a regular basis
18:34:13 that also allows you to replicate the setup on a local laptop, so you can experiment a bit easier
18:34:43 (!306 -- adds vagrant setup so you can "vagrant up" a full stack, and if we do that in CI then we're sure things work)
18:34:49 I don't think ansible and CI is the main problem, more like services being unstable
18:35:08 I wouldn't trust etherpad to keep working like a charm, although it was ok at DC20
18:35:09 it's more than just instability
18:35:35 it's also the manual config around ansible
18:35:40 that requires you to know how things are working together
18:35:59 the salsa auth things have not been documented and it would be nice to :)
18:36:01 it needs people who understand this stuff, to run it
18:36:55 sure, but I can't see any way to fix all that other than "let's make it easy to run the setup on a laptop so you can debug things", which is what I am trying to do with that MR
18:37:09 yeah, that's the logical path forward
18:37:27 currently I can only work on the SReview ansible stuff when debconf is near
18:37:49 but that's actually the worst time to work on it, because I then need to focus on making it work for that debconf, rather than fudging with ansible yaml files
18:38:46 presumably we can organise VMs to do development in, if that's useful
18:39:01 so, not sure if this is the next item. But what's the infra plan for these miniconfs?
18:39:09 that's the next topic :)
18:39:16 that talks about permanent infra, though
18:39:18 (speaking of which...)
18:39:44 should we move on to it?
18:39:50 #topic MiniDebConfs -- infra for miniconfs
18:40:17 the idea that was pushed around last meeting was to use Hetzner boxes
18:40:32 I think if we can, we should try to have the infra close to the miniconf location
18:40:45 I'm getting debian.ch to do some setup that would make that easy whenever needed
18:40:50 so not long-running infra across all 3 events
18:40:55 well, the games miniconf isn't exactly localized
18:41:03 that one can be in EU
18:41:04 I remember someone being tasked to ask Infomaniak again too but can't remember the exact details
18:41:44 tumbleweed: I guess it all depends on how complex our setup is
18:41:47 nothing in last week's minutes
18:41:56 do we want to offer prerecorded ingest?
18:42:04 etherpad?
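On the prerecorded-ingest question just raised: the DC20 upload pipeline checked incoming files before they entered the stack, and a small sanity check of an uploaded talk with ffprobe is the kind of thing involved. A sketch, where the expected resolution and duration threshold are illustrative rather than the team's actual requirements:

    # Illustrative sanity check for a prerecorded-talk upload, using ffprobe's
    # JSON output. The expected resolution and minimum duration are placeholders.
    import json
    import subprocess

    def probe(path):
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-print_format", "json",
             "-show_streams", "-show_format", path],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    def check_upload(path):
        info = probe(path)
        video = [s for s in info["streams"] if s["codec_type"] == "video"]
        audio = [s for s in info["streams"] if s["codec_type"] == "audio"]
        problems = []
        if not video or not audio:
            problems.append("missing video or audio stream")
        elif (int(video[0]["width"]), int(video[0]["height"])) != (1920, 1080):
            problems.append("not 1920x1080")
        if float(info["format"]["duration"]) < 60:
            problems.append("suspiciously short")
        return problems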
18:42:08 etc.
18:42:38 the more complex, the harder setting up will be
18:42:40 so, I'd assume that is all ansibled
18:42:51 not sure how much manual setup is in the sreview stack
18:43:17 not *that* much
18:43:45 and now that I found a way to easily create and destroy VMs, I plan to remove all the manual bits
18:43:58 also, for miniconfs I think sticking to vittoria for SReview is probably a better idea
18:44:08 actually even for full debconfs
18:44:39 because we want to be able to fix up things later if necessary without having to translate database primary keys
18:45:46 highvoltage: presumably the next most manual bit of setup was OBS
18:45:47 long running infra also means we'll have to pay for the boxes for a few months, especially if the miniconfs are rescheduled
18:46:03 oh yeah that was very manual and experimental
18:46:08 we can have a regular loop for miniconfs too
18:46:27 will become significantly better with every event
18:46:50 in my book, for the upcoming miniconfs, we should try to have a minimal setup
18:46:59 pollo: +1
18:47:16 I'd like to do away with that cumbersome sequencer completely and just use OBS's API for that instead, since we should easily be able to script what we want to play next
18:47:17 it'll save energy and make things less complex and more reliable
18:47:23 start minimal, but don't stop people from having fun
18:47:26 although there isn't *that* much we can drop though
18:47:32 pollo: what does that mean though?
18:47:39 pollo: mdco#1 was super minimal
18:47:53 pollo: but if we do that again we're not going to have much opportunity to grow and improve
18:48:32 highvoltage: the main problem I'm seeing is that few people have said "yes, I'm available as a core videoteam member for the miniconfs"
18:49:13 pollo: even if someone were to say that, I don't think there's anyone among us who understands the full stack
18:49:15 if people want to experiment, fine with me, but that means they have the energy bandwidth to do so
18:49:31 which is also a problem
18:49:41 pollo: imho we should ruthlessly cut things out that just don't work. the core stack really imho works well enough, at least it did during mdco#1, that it really didn't need that much videoteam intervention
18:49:47 I think we could try something without etherpad capture and no prerecords, just live talks on jitsi
18:50:34 MDCO was that
18:50:34 if people want to pre-record, they can share their screen in jitsi playing a video
18:51:10 I think one of the reasons dc20 went smoothly was the amount of prerecorded video
18:51:13 pollo: MDCO did that, but did also have a lot of issues with people messing up during live talks, or with connections dropping etc
18:51:21 I'd rather play the video for them from OBS in that case because the frame rate of playing a video in jitsi isn't great
18:51:23 I think having prerecorded video is not a luxury for an online event
18:51:49 hmm, ok, so pre-recorded videos, no etherpad?
18:51:55 that would cut out the grabber + vnc
18:51:58 WFM
18:52:05 you can cut the grabber and vnc even without that
18:52:16 wouter: we had 3 problems in total. one was where the speaker had an 11 minute power interruption and didn't realise. the second was an old core 2 duo laptop that kept on overheating, and the 3rd was a person who really didn't have much usable technology at all and ended up dialing in
18:52:24 wouter: afaik we had 0 problems on our side that weekend
18:53:00 tumbleweed: but then you rely on people sharing the etherpad in jitsi
18:53:02 highvoltage: okay, so maybe I misremembered the numbers. Still, for two days of events, that's actually quite a bit
18:53:08 pollo: yes
18:53:21 Can I suggest a more structured approach here
18:53:59 wouter: not really
18:54:17 highvoltage: are you saying that we don't need prerecorded talks?
18:54:28 wouter: slightly better speaker preparation goes a long way in reducing that
18:54:40 highvoltage: perhaps
18:54:53 wouter: why do you say that?
18:55:06 highvoltage: "perhaps" as in, "maybe you're right"
18:55:21 pollo: back to structure... Should we list the infra we had for dc20 and +/- it?
18:55:28 wouter: do you have any reason not to believe me? I mean, I've provided all the facts
18:55:36 I think we can do that on the ML
18:56:02 #action pollo to start a discussion on the infra needed for the miniconfs on the ML
18:56:06 tumbleweed: I'm saying that afaik we had no problems with our stack that weekend, and the very few problems we had were all on the speaker side due to some preventable hardware difficulties
18:56:06 highvoltage: eh, I think you're misunderstanding me. You made me think over it a bit more. I had a feeling, but perhaps I was wrong
18:56:38 if that's ok with people, I'd like to move to the next topic
18:57:12 highvoltage: yep, that's my memory of mdco too
18:57:12 +1
18:57:17 #topic Setup of permanent infra for testing, training and miniconfs
18:57:37 tumbleweed: imho being able to do both pre-recorded and live would be nice (although not essential). many speakers dread pre-recording their videos because it's hard to avoid the temptation to re-take and try to get everything right. so it's nice to prefer prerecorded but to provide some choice
18:57:53 highvoltage: +1
18:58:19 I think there's also varying degrees of pre-recording support
18:58:29 if people have the energy to tackle a permanent setup this quarter, I'm all for it
18:58:39 i.e. how much review the content & video teams do of incoming videos
18:58:42 I would say that a voctoweb would be super useful again
18:59:01 but I think making it a goal for a sprint during the winter
18:59:07 so, what would a minimal setup be? voctomix + streaming ?
18:59:10 would be reasonable
18:59:28 plus review (but we already have that on vittoria)
18:59:36 yeah
18:59:48 voctomix as in, the gtk app?
19:00:03 voctoweb, presumably
19:00:12 but that's just a frontend for voctocore
19:00:13 highvoltage: I think as in voctocore ;)
19:00:37 yeah just asking since 'voctomix' is kind of specific
19:00:46 it's the general name for the voctomix stack
19:00:51 ok
19:00:52 as opposed to voctocore / voctogui
19:01:17 so... just voctomix + streaming?
19:01:22 for mdco#1 we didn't even have vocto, it was just jitsi -> stream
19:01:30 ah, and jitsi presumably
19:01:39 highvoltage: yeah, and we thought that we needed the vocto
19:02:03 jitsi can be handled on social.debian.org no?
19:02:03 jitsi isn't really built for that kind of use
19:02:04 highvoltage: would we need a separate jitsi from the jitsi.debian.social instance?
19:02:14 yep, I'm just again reminding what minimal stack can work for an MDCO if needed
19:02:15 pollo: it could. But we need a jibri
19:02:40 anyway, again, I feel this should be discussed on the ML
19:03:00 no, I disagree
19:03:21 I'm about 87% confident that it's going to work out replacing jibri with OBS for MDCO#2
19:03:31 basically if we use jitsi.debian.social, then the only "extra" permanent infra we'd need is a voctocore box, a voctoweb instance somewhere, and a jibri or some such?
19:03:44 highvoltage: I still don't really see the point of that, but whatever :)
19:04:08 wouter: and streaming
19:04:13 ah yes
19:04:33 imo that's also reducing the bus factor on stream capture, as a few of us are familiar with jibri already
19:04:35 tumbleweed: would it be feasible to run the voctoweb and voctocore on the same box?
19:04:35 also, probably ideally some monitoring. But that could be at debian.social level if it had a prometheus / something
19:04:44 wouter: that's what we did at dc20, so yes
19:04:48 ah, heh :)
19:04:51 cool
19:05:09 tumbleweed: yeah not yet
19:05:58 so agreed we'd be looking at: 1 VM for voctocore+voctoweb, 1 VM jibri, 1 VM monitoring ?
19:06:05 and streaming
19:06:13 I guess 1 VM for streaming?
19:06:32 the voctocore + voctoweb VM will need to be reasonably beefy
19:06:38 everything else, less so
19:06:43 1 VM for voctocore+voctoweb, 1 VM jibri, 1 VM monitoring, 1 VM for streaming backend + DO droplets for streaming frontend
19:07:02 the streaming backend and frontend can be on the same machine
19:07:16 and you don't need a long-lived global network
19:07:21 ah, true
19:07:27 although they're cheap enough to not be a big deal
19:07:55 generally our stack is geared towards short-term events
19:08:00 it might be nice to have some way of automatically adding DO droplets based on load
19:08:07 but I don't know how difficult that would be
19:08:12 so we'll have to figure out some strategies to not consume a mountain of disk space
19:08:20 #agreed for a permanent setup, we'd want 1 VM for voctocore+voctoweb, 1 VM jibri, 1 VM monitoring, 1 VM for streaming
19:08:23 presumably shutting down voctomix in between events
19:08:33 wouter: ATM, it's fairly manual
19:08:44 yeah, I gathered that
19:08:47 (traffic needs to be sent to the new nodes, etc.)
19:08:49 I think the amount of VMs is probably fine, but I'm still going to work towards cutting out jibri
19:08:51 and the problem is people won't be redirected if they use RTMP
19:08:54 and everything needs to be in DNS, get SSL certs, etc.
19:08:59 what I'm saying is that it would be nice if we could change that
19:09:35 meh. A real CDN may be a better option there
19:09:39 or that
19:09:56 slightly on-topic, are there any objections to also streaming to youtube?
19:10:02 maybe I just have a bias against auto-scaling, but I've never dealt with problems where that complexity seems worth it
19:10:07 I agree with the Brazilians that it's actually great for discoverability
19:10:19 so, we did that at DC20, but never exposed it
19:10:23 I think it's great for discoverability
19:10:36 it requires a little more manual work
19:10:57 (if the stream goes down longer than a couple of minutes, a person with admin on the youtube account needs to bring it up again)
19:11:00 and it needs someone with access to our account
19:11:23 I can grant access to anyone else that needs that
19:11:36 yeah pity it needs someone to have access to the youtube account for that
19:11:39 tumbleweed: ITYM if the stream goes down it is considered dead and you need to create a new one?
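For the YouTube streaming just discussed, the relay itself usually amounts to forwarding the existing RTMP feed to YouTube's ingest URL and restarting the relay if it exits; the broadcast handling on the YouTube side, discussed next, is a separate manual step. A sketch with the source URL and stream key as placeholders:

    # Relay our existing RTMP stream to YouTube's ingest point, and restart
    # the relay if it dies. SOURCE and YT_KEY are placeholders; recreating the
    # YouTube broadcast after a long outage still needs account access.
    import subprocess
    import time

    SOURCE = "rtmp://streaming.example.org/live/main"   # placeholder source
    YT_KEY = "xxxx-xxxx-xxxx-xxxx"                       # placeholder stream key

    while True:
        subprocess.run([
            "ffmpeg", "-i", SOURCE,
            "-c", "copy",                 # no re-encode, just remux to FLV
            "-f", "flv", f"rtmp://a.rtmp.youtube.com/live2/{YT_KEY}",
        ])
        time.sleep(10)  # back off briefly before reconnecting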
19:11:45 but that could be a dedicated job too
19:11:47 but then again it's been a while since I did live streaming to youtube
19:11:47 wouter: yes
19:11:58 right, so I did remember correctly :)
19:11:59 but, there's some complexity to it
19:12:03 you can keep using the same streaming key
19:12:22 oh? don't remember that
19:12:24 however, if you don't create the new stream (or have the window to do it open) before you restart sending video to youtube, you can't do it
19:12:35 so, you have to restart the streaming *again* when you're ready
19:12:48 we should try to move on, this meeting has been on for 1h15 already
19:13:12 #topic Any other business
19:13:27 I had one item but already brought it up
19:13:50 I've been working on that Vagrant setup, and want to know whether other people think it's useful before I blow all my free time on it
19:14:02 (if not, I'll close the MR and throw it away, but I think it's quite nice personally)
19:14:38 https://salsa.debian.org/debconf-video-team/ansible/-/jobs/1019459 shows how it works in CI
19:14:56 (well, not entirely yet, still need to fiddle with things a bit, but eventually I'll get there)
19:16:01 I could imagine it being nice for testing things working with each other
19:16:10 but there's going to be a number of problems to solve, to get there
19:16:15 (e.g. SSL certs)
19:16:46 the defaults in CI should be the LE testing infra
19:16:49 SSL certs I'm currently just disabling on the vagrant setup
19:16:55 LE?
19:17:00 let's encrypt
19:17:13 https://letsencrypt.org/docs/staging-environment/
19:17:25 oh, right
19:17:30 I think the idea is interesting
19:17:40 you won't actually be able to get certs without being public, though
19:17:41 my Vagrant VM does not have public internet, so LE can't work, at all
19:17:48 staging or not
19:17:52 so you'll need to use snake-oil or something
19:17:53 that's why I disable it entirely
19:18:00 and then make everything else ignore the failures
19:18:55 #info wouter has been working on a CI integration setup with Vagrant: https://salsa.debian.org/debconf-video-team/ansible/-/merge_requests/306
19:19:07 #topic Next meeting time
19:19:13 okay, at any rate, it's not ready yet, but I'll continue working on it then
19:19:16 do we need weekly meetings?
19:20:05 maybe? depends on whether the miniconf dates are going to change I'd say
19:20:11 going back a couple of items, what's the next step for the permanent infra? calculating a budget and getting DPL approval? or discussing the infra stack on the ML?
19:20:30 I ask because that seems the logical work that the next meeting would cover
19:21:06 tumbleweed: I'm trying to fudge that in with debian.net stuff currently, but a separate request from videoteam would be helpful too as a fallback
19:21:19 it would be nice if someone could talk with the cloud team to see what can be done
19:21:39 the cloud team doesn't seem to be ready for discussions like that
19:21:51 the problems are at higher levels than "we have resources, ask us for access to use them"
19:22:21 so yeah, I guess making a budget and sending it to the ML first would be a good idea
19:22:37 once we agree there with a hosting plan, we can go to the DPL
19:22:53 if this stuff is a dependency for the MDCO, let's keep meeting weekly then
19:23:08 wfm
19:23:16 same
19:23:24 (well, unless I forget again :-/ I'll try not to)
19:23:45 #agreed Next Meeting: 29 Sept @ 18:00 UTC
19:23:45 put a reminder in your phone
19:23:54 #endmeeting
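As a footnote to the snake-oil certificate discussion under "Any other business": one way to generate a self-signed certificate during Vagrant provisioning, instead of disabling SSL outright, is with the python3-cryptography library. A minimal sketch, with the host name and validity period as arbitrary placeholders:

    # Generate a self-signed ("snake-oil") certificate for the offline Vagrant
    # setup where Let's Encrypt can't be reached. Host name and validity period
    # are arbitrary placeholders.
    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "streaming.vagrant.test")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )

    with open("snakeoil.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("snakeoil.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))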