17:11:39 #startmeeting snapshot #11
17:11:39 Meeting started Mon Dec 16 17:11:39 2024 UTC. The chair is ln5. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:11:39 Useful Commands: #action #agreed #help #info #idea #link #topic.
17:11:51 #topic agenda
17:12:15 i have static updates and closing; more agenda items?
17:12:29 s/static/status/1 :)
17:13:22 wild agenda bashing today! keep calm people
17:14:35 #topic status updates
17:14:57 a second snapshot machine is in the making; hw ETA: this week, best guess ETA for DSA: early january
17:15:53 (the site will be in the same city as where snapshot-mlm-01 is, but a separate site. so naming...)
17:16:15 iiuc lw07 is handling 10% of the incoming http traffic; mlm-01 is taking the rest (of what passes fastly)
17:16:33 woohoo
17:17:25 i think that pkern[m] would like some more feedback on https://salsa.debian.org/snapshot-team/snapshot/-/merge_requests/28 before merging
17:17:32 As I said I'm unfortunately still distracted. I think there are two open PRs that need merging. One with DB changes, and one for the file attachment stuff.
17:17:48 I merged the apache2 config stuff that was required for the other PR, so that should be unblocked.
17:17:56 yes, thanks
17:18:30 would your work benefit from access to a test system?
17:18:34 Well I guess we should go and merge that one and then have a followup to actually use the view. That would make lw07 faster and also make snapshot-mlm even faster. :)
17:19:01 Well, I already tested DB stuff on the live DB. /me hides. But locally I only tested with the unit tests. So I didn't test the code. So... yes, probably.
17:19:02 sounds like it won't break anything. before we start using it.
17:19:22 It might break imports because queries are wrong, but we'd notice that, I guess.
17:19:27 *could be wrong
17:19:36 i can test it out on -dev-01, but perhaps not this week
17:19:48 We are still not in a state where we can sanely reboot mlm-01 without an outage, even if lw07 will now pick up the traffic. I think.
17:19:58 ok, good to know
17:20:54 and from h01ger i heard earlier that the reprobuilder people seem happy enough
17:21:51 i'm afraid i don't have an update on my action point of evaluating caching layers in containers yet
17:22:15 We are still serving failures when there are spikes, fwiw.
17:22:33 But with the scrapers neutered, it's only occasionally when something sends a lot of requests at once.
17:23:06 are these failures unrelated to the db inconsistency problem?
17:23:39 It's just temporary overload I think. I.e. we get spikes and then serve 503s.
17:23:51 ic
17:23:52 and how do we notice failures?
17:24:26 mind you, so far we only have 2 rebuilders. i hope to have 50 or more eventually. (debian should have 10, for for each arch, and then reproducible-builds.org should also have 10? and some more independent rebuilders...) ; tl;dr: i hope rebuilderd related traffic to snapshot.d.o will soon amount to 5-10 times as much as we have now
17:24:39 s#for for#one for#
17:24:56 h01ger: will they use a common (to them) cache?
17:25:29 i hope they will be spread over the world, different jurisdictions and different operators as well. so rather unlikely
17:25:45 fair
17:25:46 they shall use local caches
17:26:33 I see stuff like https://ibb.co/FhzWNP4
17:26:56 Fastly is caching on the edge nodes and not centrally right now.
17:27:18 Other 5xx is the scraper blocker
17:27:37 but hey, if there's one for each arch in half a year, i'm happy too. (and only the amd64 rebuilder builds all 70k binary packages from a current suite. the other arch rebuilders only build their 30k arch:any packages.)
17:28:02 I expect rebuilding to be the same active set rather than requesting uncommon content.
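[editor's note: the "local caches" idea discussed above can be sketched roughly as follows. This is a hypothetical illustration, not rebuilderd's actual implementation: each rebuilder keeps a content-addressed on-disk cache so that repeated rebuilds of the same active set hit local storage instead of snapshot.debian.org. The fetch function is injected so the sketch stays network-free; a real rebuilder would plug in an HTTP download here.]

```python
import hashlib
import os
import tempfile

class LocalDebCache:
    """Minimal sketch of a per-rebuilder local cache (hypothetical names).

    Entries are keyed by a hash of the URL; on a miss the injected
    fetch callable is invoked once and the result is stored on disk.
    """

    def __init__(self, cache_dir, fetch):
        self.cache_dir = cache_dir
        self.fetch = fetch  # callable: url -> bytes (e.g. an HTTP GET)

    def _path(self, url):
        # Flat cache layout: one file per URL, named by the URL's SHA-256.
        return os.path.join(self.cache_dir, hashlib.sha256(url.encode()).hexdigest())

    def get(self, url):
        path = self._path(url)
        if os.path.exists(path):  # cache hit: no request to snapshot.d.o
            with open(path, "rb") as f:
                return f.read()
        data = self.fetch(url)    # cache miss: fetch exactly once
        with open(path, "wb") as f:
            f.write(data)
        return data
```

[with such a cache, 50 rebuilders spread over different operators would each pay the snapshot.d.o cost only once per deb, which is why the answer above to "common cache?" was "local caches".]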
17:28:17 pkern[m]: yes
17:28:45 h01ger: do you generally know a checksum for the packages you need from snapshot.d.o.?
17:28:50 rebuilding a suite (with 70k binary packages) needs about 30k different binary packages in 100k variations in total.
17:29:17 https://ibb.co/z88vNvF looks like the scrapers gave up
17:29:25 ln5: we had this discussion (apt requesting by sha256 sum) before, but i'm atm not sure where exactly this is stuck
17:30:53 h01ger: would you like to try to figure out a good place to have that discussion?
17:30:54 pkern[m]: this was re: "I expect rebuilding to be the same active set rather than requesting uncommon content" - rebuilding a suite (with 70k binary packages) needs about 30k different binary packages in 100k variations in total. so, yes, that's a very small subset of snapshot we are interested in. 100k debs making up roughly 100gb size. (for one arch)
17:31:52 So doesn't quite fit into RAM but also not that far fetched.
17:32:12 (snapshot-mlm having 500G)
17:32:21 So I'm not really worried about that.
17:32:34 ln5: i fear i'm involved in too many discussions already. when i discussed this with juliank (apt maintainer) this summer there were some blockers which made it so unlikely that i didn't even file a wishlist bug.
17:32:56 And the redirects are very highly cachable
17:33:13 h01ger: ok, np. thanks for bringing it up with apt maintainer.
17:33:48 pkern[m]: snapshot could cache them forever, yes?
17:33:59 ln5: i'll also keep having this on my mind, because it seems the right thing to do, even though the current design doesn't seem to allow it
17:34:33 h01ger: current design of snapshot you mean? bc it lacks sha256?
17:35:20 no. i think i mean debian archive.
17:35:41 ic. yes, that's a bit worse.
17:37:40 any other status updates?
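[editor's note: a minimal sketch of the hash-based fetching discussed above. snapshot.debian.org's file store is addressed by SHA-1 (https://snapshot.debian.org/file/&lt;sha1&gt;), which is part of why "apt requesting by sha256 sum" doesn't map directly onto the current design; the archive metadata records SHA-256. The URL scheme for SHA-1 is the one snapshot actually uses; the function names and the verify step are illustrative.]

```python
import hashlib

# snapshot.d.o serves any archived file by its SHA-1 (real URL scheme).
SNAPSHOT_FILE_URL = "https://snapshot.debian.org/file/{sha1}"

def snapshot_url_for(sha1_hex):
    """Build the snapshot.d.o URL for a file known by its SHA-1 hex digest."""
    return SNAPSHOT_FILE_URL.format(sha1=sha1_hex)

def verify_download(data, expected_sha256):
    """What a rebuilder would do after fetching: compare the SHA-256
    recorded in the (signed) Packages metadata against the bytes.
    This check makes the redirect/response safely cacheable forever,
    since a hash-addressed file never changes."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

[the gap discussed in the log sits between these two functions: rebuilders know the SHA-256 from the archive metadata, but addressing snapshot by SHA-256 would need support on the snapshot or archive side that doesn't exist today.]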
or other agenda items?
17:38:18 if not, we'll move to closing
17:39:01 #topic closing
17:39:16 for the next meeting i propose 2025-01-20T18:00:00Z
17:39:25 that's one hour later than today's meeting
17:40:37 more suggestions?
17:40:54 #agree next meeting 2025-01-20T18:00:00Z
17:41:02 thanks all
17:41:04 #endmeeting