15:01:36 #startmeeting gitlab check-in
15:01:36 Meeting started Tue Dec 17 15:01:36 2019 UTC. The chair is gaba. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:36 Useful Commands: #action #agreed #help #info #idea #link #topic.
15:01:42 anybody else for this meeting?
15:01:49 agenda: https://pad.riseup.net/p/pKCLewB9RpyjJfRP6X26
15:01:52 o/
15:02:42 hiro / anarcat maybe?
15:03:01 * anarcat waves
15:03:16 i'm here
15:03:17 ok
15:03:22 * anarcat loads agenda
15:03:37 feel free to add anything else there
15:03:58 added omnibus
15:04:03 sorry, got *lots* of stuff on my plate
15:04:16 anarcat: no worries, i think this will be mostly a summary meeting on where we are
15:04:28 gaba: want me to start with where i think we are?
15:04:30 i added some stuff
15:04:39 cool!
15:04:42 first item on the agenda is where we are at with the migration.
15:04:48 ahf: do you want to give an update?
15:04:53 yeah, i can do that
15:05:07 #topic migration status
15:05:18 so we did a complete migration of every ticket from trac to gitlab. we then asked everybody in the org to report in if they have spotted anything that should be fixed
15:05:28 and people, from pretty much all teams, have been really good at finding things
15:05:35 dcf found some pretty good issues too
15:05:55 i've worked on some of the cosmetic ones and am preparing for the next run of the migration tool this week to see if things are looking better
15:06:17 there are some issues that we won't be able to solve, where we do lose some features of trac (see dcf's #commentXXX issue on the pad for example)
15:06:39 but none of the things we cannot do seems to be around loss of information, which is good
15:06:48 i think that is a summary from me so far
15:07:33 i think that might actually be item #1 and #2 on the agenda here :-S
15:07:43 ok, sounds good. There are some comments on the pad (mostly from me and teor). I'm guessing you went through those
15:08:08 #2 is about the bug when trying to do PRs
15:08:13 I am here
15:08:51 yeah
15:09:04 ahf: how do you feel about the migration's progress in general?
15:09:09 related to the migration: you will run it this week and we can follow up when we are back in Jan.
15:09:43 anarcat: good i'd say, i think the list of cosmetic items was a bit larger than i would have thought, but not terrible. the API interface and pulling things from trac is very smooth
15:09:57 and i feel i understand gitlab a lot better now, which i think was a big part of the exercise too
15:10:12 i am happy we didn't set the migration date as mid-December though 8)
15:10:21 yep gaba
15:10:31 damn right :)
15:10:59 so, for the item with 503s when doing PRs: that has been the biggest problem so far i think
15:11:12 hiro has been spending a lot of time debugging it, i have spent some time on it, and we still haven't found a solution
15:11:25 I wish gitaly would tell us more
15:11:33 but one thing we have come around to is that we don't really understand the salsa setup and we both have some experience with the omnibus interface
15:12:29 are we done with the migration status? :)
15:12:32 that takes us to items 2 and 3
15:12:34 and moving to the merge request?
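This is not ahf's actual migration tool; purely as a minimal sketch of the Trac-API-to-GitLab flow discussed above, assuming Trac's XML-RPC plugin is enabled and using the python-gitlab library. The URLs, token, project path, and query are placeholders, not the real setup.

    import xmlrpc.client
    import gitlab  # python-gitlab

    # Placeholders only -- not the real migration tool's configuration.
    TRAC_RPC_URL = "https://user:password@trac.example.org/login/rpc"
    GITLAB_URL = "https://dip.torproject.org"
    GITLAB_TOKEN = "REPLACE-ME"
    GITLAB_PROJECT = "example/migration-test"   # hypothetical target project

    trac = xmlrpc.client.ServerProxy(TRAC_RPC_URL)
    gl = gitlab.Gitlab(GITLAB_URL, private_token=GITLAB_TOKEN)
    project = gl.projects.get(GITLAB_PROJECT)

    # Pull a handful of tickets from Trac and recreate them as GitLab issues.
    for ticket_id in trac.ticket.query("status!=closed&max=5"):
        tid, _created, _modified, attrs = trac.ticket.get(ticket_id)
        issue = project.issues.create({
            "title": attrs["summary"],
            "description": "Migrated from Trac #%d\n\n%s" % (tid, attrs["description"]),
            "labels": [v for v in (attrs.get("component"), attrs.get("priority")) if v],
        })
        # Trac comments become plain GitLab notes; the original #comment:N anchors
        # cannot be preserved, which is the kind of cosmetic loss mentioned above.
        for _ts, author, field, _old, new, _perm in trac.ticket.changeLog(tid):
            if field == "comment" and new:
                issue.notes.create({"body": "Trac comment by %s:\n\n%s" % (author, new)})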
15:12:35 yeah
15:12:38 yes
15:12:43 #topic merge request issue
15:12:45 merge requests and omnibus
15:12:48 yep
15:12:54 anarcat: nice with this #topic thing, i have never used that in our meetings
15:12:56 i guess that's the same point as omnibus
15:13:03 ahf: yeah, that gets outlined in the published minutes
15:13:08 smart
15:13:11 i figured i would use the bot since it's around
15:13:20 sorry gaba if that steps on your toes ;)
15:13:26 no problem
15:13:27 also the link idea and agreed commands are nice
15:13:30 merge request is the topic now :)
15:13:36 hehe
15:13:57 i think the summary is we don't know what is wrong there :-)
15:14:09 and we can throw more hours at it, which i don't think is very fruitful
15:14:24 agreed
15:14:28 let's give omnibus a shot
15:14:33 i think the solution is that we move to omnibus, which we know
15:14:40 but we do not know if the problem is salsa. We would remove the salsa setup for other reasons, to be able to maintain it better, right?
15:14:42 i think we have at least 4 people in tor that have experience with running omnibus
15:14:50 the question that remained, iirc, is how we proceed with that
15:15:01 we know that none of us have experienced this with GL outside of salsa
15:15:06 gaba: i am not sure
15:15:26 ahf: that could be just coincidence
15:15:36 i know it doesn't look like it, but that should be kept in mind
15:15:45 i think we should consider setting up a separate VM for this test
15:15:52 +1
15:16:21 uhm so we have an ad-hoc setup that allows all the different components of gitlab to communicate
15:16:22 yeah
15:16:42 we have VM capacity to set one up?
15:16:43 and we have an issue for which one of the components (gitaly) doesn't work in some situation
15:16:59 also, i don't remember the story of the name 'dip', but should we consider calling the new one just gitlab.torproject.org ?
15:17:08 i think the story is related to the debian instance being called salsa
15:17:25 yes
15:17:27 I think that we have a high priority that moving to a standard setup for all the components should solve our problem
15:17:30 i think that's rather silly and confusing
15:17:43 user-facing names probably shouldn't be in-jokes
15:17:49 catalyst: agreed
15:18:03 agree
15:18:04 ahf: VM capacity is kind of what i wanted to discuss in #5 :)
15:18:17 the short answer is: suuuure, we can spare a VM :)
15:18:21 s/priority/probability
15:18:33 but the long answer is: you can't actually use it to migrate all of trac and git-rw and gitweb to it :p
15:18:39 (it's the same answer for the current dip as well, btw)
15:18:56 so yeah, we can have a VM to play with like we have now, but we can't really use it :)
15:19:11 ah, because of disk space? i hadn't even thought of that, anarcat
15:19:22 hiro: it's worth a shot. i'm not sure what the probability is, but it's worth trying the change :)
15:19:28 ok. everybody agree on going this way with a new VM to test gitlab without salsa, right?
15:19:33 ahf: disk space, memory use, I/O, CPU usage, all of the above
15:19:36 gitlab is massive
15:19:49 gaba: i think so!
15:19:54 gaba: yeah
15:19:55 ok
15:19:56 we are also a lot of people using it at the same time
15:20:05 hiro, catalyst: you ok with this?
15:20:10 yes
15:20:10 but, we are gonna keep dip running at the same time, right?
because i use dip for the migration tests (until we have something new i think)
15:20:29 yes I need it running
15:20:30 we would have to migrate from dip into whatever once we see that it will work better
15:20:51 wow, this is getting complicated
15:20:52 gaba: sorry, i can't tell what the proposal is?
15:20:58 the test instance and the test of the test
15:21:10 catalyst: a new VM to test gitlab without salsa
15:21:30 and we do a trial migration to that? and it's separate from dip.tpo?
15:21:31 so that connects with one of my concerns
15:21:43 which i haven't brought up as a separate point, but seems relevant here
15:21:48 it seems we're using the test dip instance in production
15:22:10 i'm worried about that
15:22:17 and then if it works we use that and not the salsa ansible setup. The question is what "it works" means. Are we deciding to do it now because it will be better for us to maintain it?
15:22:19 exactly because of the situation we're in now
15:22:24 yes, I'm worried about that too
15:22:41 i asked for a big warning to be posted on the site when it was put online
15:22:42 it is supposed to be a test
15:22:45 and there was one for a while
15:22:50 anarcat: i do think we have been saying that everything on dip is for testing purposes, but we have also spent some time making sure it works for the people who use it
15:22:52 there is one at the top
15:22:54 but now all that's left is "Canonical locations for source code https://gitweb.torproject.org/ and ticket system https://trac.torproject.org"
15:22:58 that's not a warning
15:23:09 that's a friendly informative and unreadable phrase
15:23:14 we removed the bottom warning during a network team meeting not that long ago
15:23:19 since it broke horizontal scrolling :-)
15:23:19 that people are already disregarding
15:23:20 ironically enough
15:23:26 ahf: okay well, that wasn't a good idea i think
15:23:33 i'm not saying it's the reason why that happened...
15:23:38 ... but people are using dip in production now
15:23:46 no, it was used like this before we removed that warning :-)
15:23:49 Maybe we should send a mail to internal reminding people that we have not migrated to gitlab yet, and to please keep using the systems that are "official" for Tor
15:23:50 and that kind of screws us up now, because we can't easily start from scratch again
15:23:53 ahf: yeah probably
15:24:11 gaba: i'm not sure that will be enough
15:24:14 network team only has been using it for roadmapping
15:24:17 we've given people access to the candy store
15:24:17 looking at it another way, people see enough value in it to start using a testing service for production, even in a limited way
15:24:22 and now we're telling people not to eat the candy
15:24:26 metrics is using onionperf there
15:24:33 i think the amount of data we'd have to move from dip to a new instance is pretty small and we can ask the teams to do that themselves
15:24:34 anti-censorship team is not really using it
15:24:39 the web team is using it
15:24:46 the web team is the big user i think
15:24:50 i think basically everyone but TPA has already started using dip in production :p
15:24:53 I think the db and files can be easily migrated
15:24:54 but the web team also has hiro who knows this system
15:25:12 so migrating data is possible
15:25:15 that's not my concern
15:25:18 my concern is canonicality
15:25:22 if that's even a word
15:25:25 if we set up a new instance
15:25:30 where's that data supposed to end up?
15:25:38 what do you mean anarcat?
15:25:47 i mean that we'll have two copies of the data
15:25:51 where will people keep using gitlab?
15:25:54 which data?
15:25:59 or are we saying everyone will stop using gitlab?
15:26:09 anarcat all the code is redundant from tor git; all we would lose if we were to shut this down would be a few tickets
15:26:11 gaba: issues, roadmaps, pull requests, git repos, whatever people are doing on gitlab now
15:26:24 that are probably already copies of trac tickets
15:26:27 hiro: i think we're underestimating the impact of such a shutdown
15:26:34 but i'd be really happy if we could just try that out now
15:26:37 clear out the database
15:26:39 and see what happens
15:26:51 please don't clear out the database :)
15:26:52 i would be ready to bet that (1) people would yell like crazy and (2) no one would even dare to do this now
15:26:57 well there you go :)
15:26:59 this shit is live
15:27:02 yes
15:27:03 it went into production
15:27:06 without us calling it
15:27:10 i think we just migrate the data from one instance to the other
15:27:14 and then we shut down dip
15:27:27 yeah but ahf wants to keep dip to play with the migration
15:27:32 it's not my experience in tor that people would yell like crazy :-)
15:27:38 yes, the idea was that people/teams were testing it but not really using it
15:27:43 maybe we didn't yell that enough
15:27:43 i think people would understand this, and we would give them time to move things over, and they would do that
15:27:46 ahf: you should join TPA :p
15:27:57 and maybe feel a bit counterproductive that day, but hey, there is some admin overhead in everything we do
15:28:03 anarcat: :-S
15:28:08 it's not really production, but if you have to test things you need to use things too
15:28:11 but ahf can use the new instance for testing the migration
15:28:14 we do not have to keep dip
15:28:29 no, we put a deadline on when people need to have moved off dip (IMO)
15:28:36 maybe i'm worried about nothing
15:28:39 we can even keep the dip domain on the new instance
15:28:52 let's just say i'm worried about this, and move on :p
15:28:53 there are tools to migrate data from one gitlab instance to another
15:28:55 it's not complicated
15:29:07 then i can get to say "told you so" later
15:29:10 i hate doing that :p
15:29:19 i think it is a valid concern anarcat, so keep raising them
15:29:22 i still do not understand the worry anarcat
15:29:26 it might be you have to do a told you so later though :-P
15:29:39 that has happened more than once in this migration project and i'm sure it will happen again :-D
15:29:43 heh
15:30:00 gaba: i'm sorry i can't frame this properly :)
15:30:13 Hi
15:30:15 my concern is that we're now in a state where we have production data in a test instance
15:30:22 we're talking about setting up a new instance
15:30:23 Just reading backlog
15:30:28 and migrating data to that new instance
15:30:30 I think anarcat is worried there might be some disruption between migrating from trac and migrating again from dip
15:30:35 so even before we started doing the actual migration
15:30:39 Ok
15:30:41 and loss of data
15:30:42 we're already doing a migration of data
15:30:47 hi pili !
15:30:52 :)
15:31:02 we weren't supposed to have prod data in there
15:31:08 I see
15:31:11 yes
15:31:12 and we were supposed to be able to pop that thing in and out of existence
15:31:17 to, say, try omnibus instead of salsa
15:31:21 without having to do a migration
15:31:24 can we do a survey of dip users to see who has data there that they're not willing to lose?
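On "tools to migrate data from one gitlab instance to another": one such route, sketched below only as an illustration, is GitLab's project export/import API driven from python-gitlab; the instance URLs and tokens are placeholders. The other route mentioned in the meeting is a whole-instance copy of the database and files (backup/restore), which this sketch does not cover.

    import time
    import gitlab  # python-gitlab

    # Placeholder URLs and tokens for the old and new instances.
    old = gitlab.Gitlab("https://dip.torproject.org", private_token="OLD-TOKEN")
    new = gitlab.Gitlab("https://new-gitlab.example.org", private_token="NEW-TOKEN")

    for project in old.projects.list(all=True):
        # Ask the old instance to build an export archive and wait for it.
        export = project.exports.create()
        export.refresh()
        while export.export_status != "finished":
            time.sleep(5)
            export.refresh()
        archive = "%s.tar.gz" % project.path
        with open(archive, "wb") as f:
            export.download(streamed=True, action=f.write)
        # Re-import under the same path and namespace on the new instance.
        with open(archive, "rb") as f:
            new.projects.import_project(f, path=project.path,
                                        namespace=project.namespace["full_path"])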
15:31:27 now we have to do a migration before the migration :)
15:31:40 catalyst: but see that's what i mean, more extra work already :)
15:31:43 that's what i was worried about
15:31:48 we don't need to go in details over this
15:31:51 for the network and anti-censorship teams it is mostly copying trac tickets into the boards
15:31:52 we have prod data in there
15:31:53 I could live with losing data and I think I’m a big user
15:31:54 we'll deal with it
15:31:58 "told you so" :)
15:31:59 for metrics team they are using the onionperf repo
15:32:06 i think people understand this is a test instance. i don't think we need to do more than give people time to move from one instance to another
15:32:27 We’re only using the project management features
15:32:44 all issues are in trac, right pili?
15:32:56 We did close some tickets in trac to move to dip
15:33:18 I don’t think it was a huge amount though
15:33:21 we considered doing so in TPA as well, just for the record
15:33:33 we decided against it, there could have been a fluke there
15:33:40 even we are unclear about the state of affairs i think :)
15:33:41 There is some stuff that is only in dip right now though
15:33:43 anyways
15:33:49 i feel i totally hijacked the conversation, sorry :p
15:33:50 yes, we all just thought gitlab in dip was working
15:33:59 my point was just that if we set up omnibus, we need to think about migration and support
15:34:03 which instance are we going to support
15:34:12 and are we going to keep the salsa/dip instance around
15:34:22 and how are we going to migrate the data
15:34:25 migration between gitlab instances is very easy and i think once we do that we just turn down dip
15:34:34 as hiro said, we can just copy the DB over and start from there
15:34:41 yeah and there are tools
15:34:48 it's no biggie really
15:34:55 we need an instance of gitlab that we can maintain
15:34:55 but if we do that, we have two gitlab copies, and we either need to clear dip, or lock it away to only ahf, or shut it down
15:35:10 shut it down would be my option
15:35:11 i don't need it, i just need it until the other one is set up :-)
15:35:14 alright, no biggie :p
15:35:23 alright
15:35:24 i have a qubes VM locally with a gitlab instance i also test things with for speed
15:35:34 so it's not even that bad, it just allows other people to see progress that i test against dip
15:35:45 then we shut down dip, once it's migrated to an omnibus instance... and once we've verified the PR issue doesn't come up?
15:35:54 yeah
15:36:01 in that order
15:36:04 quite importantly
15:36:25 yeah
15:36:26 ok
15:36:31 wrote this in the pad on the notes
15:36:35 sorry again gaba i kind of screwed up the agenda :p
15:36:57 don't worry anarcat. I think it is important to bring up all concerns. This is why we are doing this meeting
15:37:20 yeah!
15:37:28 who is going to do this?
15:37:36 do we do it before or after holidays?
15:37:45 awesome
15:37:46 ha
15:37:51 *after*
15:37:54 definitely after
15:37:57 :)
15:38:03 at least i'm not going to touch this before january
15:38:08 no way no no no :p
15:38:19 i think after too, early january ideally
15:38:21 but if hiro and ahf want to work on this over the holidays, have fun :p
15:38:23 and i can continue to use dip
15:38:34 no no, on friday i will be in non-work mode i think
15:38:35 ok
15:38:36 it might be a good time, if you don't believe in the holiday thing, because then you can break stuff freely because everyone is gone
15:38:45 but i am outta here on friday evening
15:38:48 holidays
15:38:49 ditto
15:38:50 i miss em
15:39:11 hiro, anarcat: you two will set up this vm and do the migration?
15:39:23 https://www.youtube.com/watch?v=eBShN8qT4lk
15:39:38 I'll do that probably gaba
15:39:43 thanks
15:39:50 hiro: awesome, i can help if you run into weirdness
15:39:55 but not during the holidays
15:40:02 I'd be chillin
15:40:04 awesome
15:40:19 great!
15:40:28 hiro: i'm thinking we should put more of this in Puppet as well so that might require creating a new profile and so on
15:40:37 hell, maybe there's already a puppet module we can use, we should ask micah
15:40:38 do we move into 5. hardware requirements ?
15:40:47 #topic hardware requirements
15:40:54 yeaah :)
15:40:56 anarcat yes I have been looking at the puppet roles for dip and there will be some stuff that needs to change
15:41:05 hiro: we should make an entire new role
15:41:12 hiro: we can talk about this later :)
15:41:30 so that connects with my "wait, we're live now?" kind of wake up moment last week :)
15:41:39 :P
15:41:50 i realized we kind of just created this dip VM without thinking too much about hardware requirements, compared to git-rw
15:42:00 because we were assuming git-rw would stay around forever
15:42:26 and i suspect that might not be the case - one of our blockers right now is, after all, merge requests, which means people *are* using the git part of gitlab
15:42:33 which is not very surprising when you look at the name
15:42:34 git
15:42:35 lab
15:42:36 :p
15:42:37 anyways
15:42:58 so i'm worried people will slowly but eventually converge on "everything git is on gitlab anyways" with time
15:43:05 uhm so the idea with gitlab was that it would provide an easier way to visualize merge requests
15:43:17 and i'll have to deal with problems like "i can't clone the TBB git repo" on *both* git.tpo and gitlab.tpo
15:43:22 but I do not think people want to move from git-rw
15:43:27 hiro: yeah but to do a MR, you still need to host the git repo
15:43:30 yes, the idea is to still keep git-rw BUT I agree that people may slowly move to gitlab as things may get easier
15:43:33 for me that's the worst of both worlds
15:43:54 even if we do want to stick to the plan, which seems to be to keep both git servers running (and that i find questionable)...
15:44:01 we may have repos that will still use git-rw and others that will move
15:44:02 ...
we still need to give more punch to this setup
15:44:20 i think we should consider the hardware requirements of this project more seriously and specifically
15:44:37 gaba: right, i understand this now :)
15:44:43 i think people will slowly move their things to gitlab too over time
15:44:52 i was under the impression we wouldn't use the git bits of gitlab at first, and that didn't make sense to me :)
15:44:54 because it will be something they can just do themselves and it's "easier"
15:44:55 now it makes more sense
15:45:00 yeah
15:45:07 people will migrate to this instantly
15:45:14 is my opinion :p
15:45:17 but whatever, that doesn't matter
15:45:32 the rhythm of migration is not my concern as much as having large repos or a large number of repos hosted there
15:46:01 i'm wondering if we have budget to set up new machines for this
15:46:09 or if this just comes out of my poor old TPA budget ;)
15:46:14 network team said they have concerns about moving out of git-rw into gitlab so I do not think that will happen any time soon
15:46:25 i'm looking at https://grafana.torproject.org/d/Z7T7Cfemz/node-exporter-full?orgId=1&var-job=node&var-node=gitlab-01.torproject.org&var-port=9100
15:46:35 which is the gitlab-01/dip grafana dashboard
15:46:37 anarcat: i would guess it will come out of your poor old TPA budget :)
15:46:40 i thought people were planning on using gitlab similar to how many of us are using github -- merge requests/pull requests in the nice web front end, but actually perform the merges on git-rw for security reasons
15:47:01 catalyst: yes, that is what I understand
15:47:10 for Core Tor
15:47:20 this thing isn't live yet and we're 26% CPU busy, 70% used memory (out of 6GB), 30% filesystem used...
15:47:29 (well it's live, but you know what i mean)
15:47:29 uh
15:47:42 everything below 100% memory usage is wasted RAM 8)
15:47:45 can you pm the username and password for grafana?
15:48:01 but ok, those data points do say something
15:48:08 thanks
15:48:12 i pm'd gaba, happy to pm the sekrit to others here, let me know
15:48:23 gaba: i'm not worried about tor little t, to be honest, it's not that big of a repo
15:48:49 I agree that it may be good to think about hardware requirements now
15:48:51 gaba: i'm more worried about N teams having X users with each their own fork of P projects (so N * X * P repositories :p)
15:48:52 catalyst that's what I wanted too
15:48:57 in theory, gitaly was designed to help with that
15:49:06 does anybody know what the configuration is for other gitlab instances that are used the same way?
15:49:11 but i'm wondering if it might be worth splitting up our setup so that gitaly is on a different VM for example
15:49:22 in any case, we'll have to bump the hardware requirements of this in the short term
15:49:35 my point was mostly a "heads up, i'll need money for this" kind of warning :p
15:49:36 ok
15:49:47 anarcat: can that money come out of the TPA budget?
15:49:49 maybe it just comes out of the TPA budget, but it's good to know if that's the case
15:49:52 gaba: i think so? :)
15:49:59 it really depends on what it will turn out to be
15:50:09 just for completeness' sake...
15:50:19 catalyst: i think your assumption is correct as well...
15:50:23 anarcat: can you figure out what we need and how much it will be?
15:50:24 i have no idea how a split gitlab setup would be.
i don't know it well enough to say anything about that
15:50:33 my problem was that i didn't realize that would involve hosting those git repos in gitlab
15:50:44 ahf: i think that's what gitaly was designed for
15:50:49 but i guess it's something we should think about
15:50:55 what's gitaly
15:50:56 i could see if there are upstream docs for that kind of stuff
15:51:01 catalyst:
15:51:12 catalyst: gitaly is the "serve git repos" components of gitlab
15:51:19 catalyst: the git-serving component in gitlab
15:51:20 component*
15:51:25 anarcat I think storage can be on different machines... but I am not sure, since gitaly has been integrated more and more into gitlab, how much it helps to split the two things now
15:51:35 okay
15:51:36 we'll see
15:51:39 anyways
15:51:42 it was just a heads up :)
15:51:42 I might be wrong tho
15:51:58 but going through the code, a lot has been migrated into gitlab in the last few months
15:52:14 is performance going to be acceptable with network storage?
15:52:25 this graph is also fun https://grafana.torproject.org/d/ER3U2cqmk/node-exporter-server-metrics?orgId=1&var-node=gitlab-01.torproject.org:9100&fullscreen&panelId=19
15:52:28 well like debian uses network storage over google
15:52:35 catalyst: that's an excellent question, i'm not sure
15:52:38 hiro: for gitaly??
15:52:47 i thought it was just for docker images and such
15:52:58 I don't remember anarcat
15:53:13 well anyways
15:53:14 I need to check the ansible stuff
15:53:23 this thing is all cloud-friendly-blabla
15:53:30 it's designed to run in the cloud, so presumably it should work :p
15:53:35 but i don't trust that cloud stuff
15:53:36 so who knows
15:54:26 i'll remove point 6. git-rw migration, we talked about this and we'll stick with the current "it will exist forever" illusion for now :p
15:54:32 ok
15:54:33 i'm done with the hardware
15:54:41 i guess the conclusion is me and hiro will sit down and think about it :)
15:54:52 ok, thanks!
15:54:53 hiro and ahf have more experience than me hosting gitlab anyways, so they probably know more than i do!
15:55:12 i would also advise doing a survey of existing gitlab instances to check their hardware reqs
15:55:15 i don't look at machine health and things like that though. i will happily admit that i am a terrible sysadmin :-S
15:55:20 i use a hammer for everything
15:55:33 :)
15:55:37 0xacab, gnome, debian, all run gitlab in various forms, and will have interesting info and warnings we should ask them about
15:55:42 anarcat: i can ask around for some projects i know that use self-hosted GLs
15:55:47 OTR also recently moved to GL i think
15:55:54 ahf: awesome
15:57:04 cool, we are nearing the hour
15:57:17 next steps is the last item?
15:57:25 we use gitlab at systerserver but it's not heavily used
15:57:32 anything else that we should consider?
15:57:36 i think we have a good idea on next steps based on all the stuff we have discussed here :-P
15:57:41 yeah
15:57:46 so next steps are:
15:57:47 1. holidays
15:57:48 :)
15:57:49 i continue with the migration stuff on dip, then in january we begin the omnibus work
15:57:56 hey, i have friday before holiday too :-P
15:58:02 2. continue migration tests
15:58:03 heh
15:58:08 3. omnibus setup
15:58:09 but yes
15:58:11 4. omnibus migration
15:58:16 5. more migration tests? :)
15:58:28 sounds good
15:58:29 oh and 3. includes "hardware provisioning checks"
15:58:36 or whatever we want to call this
15:58:37 yeah, the hw part seems important too
15:58:39 Adding all this to the pad.
Please look at that to be sure we all agree
15:58:54 awesome
15:59:31 let's call the omnibus instance gitlab-02
15:59:38 gitlab-01 is the current "dip/salsa" instance
15:59:46 ok
16:00:33 Anything else? We are at the hour for this meeting.
16:00:50 * ahf is good
16:00:53 all good
16:01:03 thanks gaba !
16:01:12 ok. Let's finish the meeting then.
16:01:15 #endmeeting
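The Grafana dashboards linked during the hardware discussion read node-exporter metrics from Prometheus. As a rough sketch of checking the same CPU, memory, and disk numbers directly over the Prometheus HTTP API (assuming it is reachable; the endpoint URL below is a placeholder, and the instance label matches the gitlab-01 node-exporter port shown in the dashboard link):

    import requests

    # Placeholder endpoint; the real Prometheus/Grafana setup sits behind auth.
    PROMETHEUS = "https://prometheus.example.org/api/v1/query"
    INSTANCE = "gitlab-01.torproject.org:9100"

    QUERIES = {
        "cpu busy %": '100 - avg(rate(node_cpu_seconds_total{mode="idle",instance="%s"}[5m])) * 100'
                      % INSTANCE,
        "memory used %": '100 * (1 - node_memory_MemAvailable_bytes{instance="%s"}'
                         ' / node_memory_MemTotal_bytes{instance="%s"})' % (INSTANCE, INSTANCE),
        "root fs used %": '100 * (1 - node_filesystem_avail_bytes{instance="%s",mountpoint="/"}'
                          ' / node_filesystem_size_bytes{instance="%s",mountpoint="/"})'
                          % (INSTANCE, INSTANCE),
    }

    for name, query in QUERIES.items():
        data = requests.get(PROMETHEUS, params={"query": query}).json()
        for sample in data["data"]["result"]:
            # Each sample value is a [timestamp, "value"] pair.
            print("%s: %.1f" % (name, float(sample["value"][1])))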