RIPE 88

Open Source Working Group
23 May 2024
2 p.m.


MARCOS SANZ: Start finding a seat.

MARTIN WINTER: If you are wondering where you are, you are in the correct Working Group. So stay here.

MARCOS SANZ: Welcome to the Open Source Working Group session. My name is Marcos Sanz.

SASHA ROMIJN: I am Sasha Romijn.

MARTIN WINTER: I am Martin Winter.

SASHA ROMIJN: So, welcome to the session. This session is live‑streamed and recorded as always. And first I want to thank the scribe from the RIPE NCC and the stenographers. Let's move on.

So, I'd like to start with the administrative matters. The agenda from last time ‑‑ sorry, the minutes from last time were published on the mailing list. Does anyone have comments on these that we need to... I don't think so, I'm sure everybody has read them.

Then our agenda for today. We will start first with a review of the Chair selection process from last time by Martin.

Next we have a talk on open source quality assurance processes by Petr. There has also been a survey that you may have seen on the mailing list, which was run over the last few weeks, and the talk will be about that survey and several things ISC is doing. Next we have Maria talking about contribution policies for open source projects. This is something that was already presented last time and has continued to be developed.

And finally, we will have two lightning talks: first, roadblocks to open source in Asia Pacific by Martin, and finally the Community Projects Fund by Gergana. And we also have plenty of time for discussion. So, we're hoping that you enjoy the talks and have plenty of input.

MARTIN WINTER: So before we start, I have a quick question. This time our Working Group is scheduled in parallel to the Routing Working Group. And I was curious: how many here, show of hands, found it a hard decision which one to attend?

MARCOS SANZ: That is like ten‑ish.

SASHA ROMIJN: Then, I'd like to welcome our first speaker, Martin on the Chair selection process review from last time.

MARTIN WINTER: So, I assume all of you are aware we had a Chair selection last time. For the ones who are not that familiar with our Chair selection procedure: right now there is a discussion going on, because each Working Group has a slightly different agreed procedure. The procedure we have is published on the RIPE website; you can go and read it up there. And we tried to follow it. As it says, we asked for candidates; about two months before the meeting there was a call, and we got three candidates. At that time we were two Chairs, and we were looking at whether we could get at least a third Chair. We got the three candidates, and for us the good thing is it was the first time we actually had a choice. So that made me quite happy. We started this Working Group at RIPE 67, and the only time before that we had a candidate was when Marcos joined a year earlier; otherwise we never had one. And with Marcos, with one open seat and one candidate, there was no election basically, it was just agreed that you join. This time there was actually an election, because we had three people to pick from.

We had a discussion with these candidates a bit beforehand; that's actually the idea. We had 14 days to coordinate a meeting with them. That was part of the problem: some of them were not really responding, and with a lot of these candidates it turned out way harder than expected to contact them. Can we have a chat? Because we want to see how you want to do it. Do you still want to run? How do you see it? In our policy we say that maybe we can all come to a unanimous decision together, and then we don't need to have any voting at all.

The one thing we got confirmation on from them: they only wanted to run for the one open seat. They didn't want to topple any of the existing Working Group Chairs. That's a question we always ask, and we left it open to the candidates. So, as they all agreed, that was basically three candidates for one open seat.

Then, because we couldn't really have all the discussions beforehand, we ended up making a last minute change. We decided that instead of having the discussion at the meeting itself, each of them would just announce themselves and explain who they are, and then we would do the voting within 14 days afterwards on the mailing list. That worked well for other Working Groups, so we assumed it would not be a problem for us.

So, then this is what happened. When we opened it up, I think at noon on November 30, we started it. And if you look at the time stamps, you see how the votes came in over three days: mainly Sasha got a lot of votes, we got a few single votes for the other candidates, and then a little bit more votes came in. Everything looked fine; I thought it seemed to be quite clear. Then, shortly before it was closing, we noticed that a lot of new votes started to come in, with just a few days to go, and suddenly at the end this happened. At the beginning it had all looked normal, so we looked a bit more into the detail of who was voting, because I didn't recognise all these names. And we started noticing that nearly every one of these new voters was new: they had signed up to the mailing list and then, most of them within a few minutes on the same day, they voted. So, for me, the question is: how did they even know about the voting going on? They couldn't have heard about it. They had never attended a RIPE meeting before; they were new to the mailing list. And that's how the vote looked: the blue is from people who attended RIPE meetings before, and the red additional ones are from people who signed up on the list and immediately voted.

So, the key thing, which is sometimes misunderstood: under the current rule it's not a strict vote. It's basically a vote of support. The people vote, but at the end the Chairs have the final say. Obviously this was something we never planned for. So, we had a discussion and we made a decision. I brought it up on the list first, to ask these people how they learned about the voting and all that. We got a lot of philosophical discussion from them about how it should be done, but nobody answered how they learned about the election, nobody mentioned specifics. So at that point we decided to disqualify all these votes and ignore them. And that's how Sasha got elected.

As expected, obviously, there were some protests. I was looking to see whether any of these new members would say that they just coincidentally signed up and really wanted to be involved, and whether any of them is at this RIPE meeting, but as of yesterday, none of them had registered either online or on site. If that has changed and one of them is in the room, I would love to get some feedback from that person.

Now, obviously this is over. This is fact now. The question is: What about the future?

Mirjam, the RIPE Chair, who is probably in the room over there, is working on some ideas. First of all, there is the confusion that each Working Group has a slightly different policy. So she is trying to figure out whether we can make one policy and do this better. Feel free to talk to her, especially if you have a strong opinion one way or another. Tell her how you see it.

From our Working Group's side, we don't know exactly how to do it yet, but most likely we will try to do the voting during the meeting again. We need to figure out how we can avoid the same problem again.

That creates a requirement: if you are voting during the meeting, it means that people who vote have to attend at least one RIPE meeting, at least to have a way of voting. And we also had the discussion: should there be a requirement for the candidates as well, that they have attended a RIPE meeting before, or multiple RIPE meetings? I heard a few different opinions; at the Working Group Chairs lunch I think there was quite strong favour for that. I don't know how people here think; I would love to hear some feedback. The idea is that you're not becoming a Chair just because you know the facts or are in that industry or something. You should also know the community, because it's more of a community job, and you should know how it runs. If you have never attended a RIPE meeting, that might be challenging.

Then there was also the question of stating our own preference. During the voting, as in the past, it wasn't a rule, but Marcos and I do not state our support on the mailing list, because we think that would be unfair influence for a candidate. So we had our own opinions, but we basically kept them hidden. Obviously, stating them could have had a lot of impact too, especially at the end when it was getting close.

So, with that, I would love to open the floor, and I would love to hear in the next few minutes what opinions you have.

SASHA ROMIJN: Thank you, Martin.

AUDIENCE SPEAKER: Andrei, ISC. I think it is reasonable to require a future Chair to have attended the meetings, and I also think that there should be a commitment to attend future meetings, because it does make sense to have a Chair that attends RIPE meetings. I don't think all of them, but something like at least one a year.

MARTIN WINTER: On that, the current RIPE rules from way back basically state that one of the Chairs should be at each physical meeting as a minimum. So it creates pressure that you should attend, and even physically attend some of them. And when you say you support the idea that they should have attended a RIPE meeting before, do you have an opinion on on‑site versus online only?

AUDIENCE SPEAKER: I think that on‑site makes sense. I know that it reduces the pool of possible candidates, but it is important to know the community, because this is not like you are a servant, and it's not an honorary role or something; you should know the people, and you should be well known in the community that's here. But maybe it will solve itself just by changing the voting process to be during the meetings, because people are less likely to vote for a Chair that they have never seen before, right? So...

MARTIN WINTER: Thank you for your opinion.

AUDIENCE SPEAKER: Jim Reid. Speaking for myself: I have quite a few comments to make here. So first of all, the idea that we have a unified Working Group Chair selection process imposed on the Working Groups is, in my view, wrong. We are a bottom‑up organisation. Working Groups should be left to their own devices to decide how they organise themselves. And as long as they have got a fair and open selection process, how they choose to do that is, largely speaking, up to them. It shouldn't be imposed from above. I think that's very, very, very wrong.

However, let's get down to business.

I am tempted to say I told you so, because for a long, long time I have not been in favour of voting for things like Working Group Chair selection, precisely because of the problem that you have just outlined, Martin. The system is very easily gamed, and we have seen that happen. Someone and their imaginary Facebook friends all of a sudden pop up on the mailing list and vote for the same guy or girl, and it might be that they are all proxies for the same person. It might even be the same person. In the mechanism you have got there, you have the wisdom of the existing Chairs of the Working Group to decide when somebody is acting in bad faith and take action, and in this case you did take action, and I praise you for the decision that was taken. This was clearly somebody acting in bad faith, trying to game the system, and you disqualified those votes. So I think that mechanism is perhaps the best way forward in terms of dealing with people that are trying to manipulate themselves into positions of influence or imagined power. Maybe you want to add a check with the RIPE Chair or the Vice‑Chair: if the consensus view of the Working Group Chairs inside that particular Working Group is that there is some monkey business going on in the selection process and they would like to disqualify votes, run it past the RIPE Chair or the RIPE Vice‑Chair to get a second opinion, to make sure that what you are doing is a reasonable and fair way of handling it.

So I think that's probably the best way to do that.

My next observation: talking about this in terms of introducing new complications, you know, you have to have attended so many meetings, or you have to give a commitment to show up at particular meetings, I think this is the wrong approach. This is turning things around the wrong way. We are creating an elaborate, complicated mechanism; we are all engineers, we love to do that kind of thing. We really should just apply common sense. If someone is appointed as a Chair and they are not doing a good job, it's up to the Working Group and the other Working Group co‑chairs to say to them: you are not doing the job, you are not pulling your weight, step down. We don't need complicated mechanisms like: oh, you have only been to three of the last five meetings. No. Just apply common sense. The more rules you create, the more mechanisms there are for people to try to figure out how to game the system. It's a zero sum game; you are never going to win that race. In my view, the way to win these games is not to play them.

MARTIN WINTER: Thank you for your feedback. I really hope you speak to Mirjam, even if you don't support a unified policy, because I think we may get more or less at least a draft for the different Working Groups to make a decision on. Thanks.

MIRJAM KUHNE: Just one data point, just to clarify. When you said voting in your presentation, you didn't actually mean voting; you really followed the old process where people on the mailing list were asked to provide statements of support or non‑support for the candidates, which kind of turned into a bit of a plus one, right, that's what happened. And that's why I think you are bringing it up, because the process that we have been using for so long in this community didn't work in this case. I just wanted to clarify this because it actually wasn't really voting in terms of using a tool or an election or something. It was really meant to be providing statements of support on the list.

MARTIN WINTER: Thank you for the comment. So, yes, I talked about voting; we have talked about it as voting all the time. For me it's more like a vote of support for a candidate. It's not a strict vote, and under the current rule it's not. But yes, that caused a lot of confusion, and we may have to be careful and make that clear.

AUDIENCE SPEAKER: Hello. With a little bit of knowledge and experience from a different RIR region: there is a NomCom in the APNIC region, and they do some sort of background check and require a statement of intent from the people who are trying to run for Working Group Chair or co‑Chair. What the statement says about attendance has to be verified against the RIPE database, so it needs to be a community effort with a small team of two or three people, one from the RIPE NCC and two from the community, who can verify that statement and also verify that they have attended a previous RIPE meeting. So, this is what my proposal is. Thank you.

MARTIN WINTER: Thank you.

AUDIENCE SPEAKER: Maria, cz.nic and BIRD. I'd like to add some notes. First of all, please don't stick to common sense, because common sense is not common, and it typically doesn't work under stress. I have been in lots of situations where I thought I was using common sense, and I was just acting on my first stress reactions. This is not a good idea. And there are processes which exist exactly for these situations. We should have processes, not common sense. We should use common sense for creating the processes and for thinking about the possible problems up front. I know we are always going to fight the last battles and the lost battles, but it's better to fight the lost battles than to fight nothing and just rely on common sense.

Just a suggestion: for the additional qualifications, I would like to see those qualifications as part of a biography or something like that, because even though I see lots of people here and know who is around, I don't know everybody, and I don't want to be unfair to a person just because they don't happen to stick around me.

MARTIN WINTER: Just a quick clarification question. When you say use common sense for making the rules, and then afterwards not use common sense, that sounds to me like having much stricter rules and trying to plan for everything ahead of time. Am I correct?

MARIA MATEJKA: I would rather we did not just say that we are using common sense. I would rather see a bit more policy. The policies don't have to be strict, but there should be some, and rules look like the correct way to go. We should not...

AUDIENCE SPEAKER: Shane Kerr. I wasn't going to say anything, but I had to stand up because basically I disagree with everything Jim said. So, I do think having consistency between Working Groups, while not a hundred percent necessary, is a good idea, because for an outsider to the community, having to understand that this Working Group selects Chairs one way and another one a different way is confusing, and it reduces the legitimacy of you as Working Group Chairs and of us as a community. That legitimacy is also important in terms of some of the other suggestions, which I kind of disagree with. In terms of giving the existing Working Group Chairs a say in who gets to be a candidate, creating a NomCom, having these kinds of gateways to putting yourself forward: I find that's going to be a bit off‑putting, and it also again reduces the legitimacy of the current Chairs, because it makes it feel to an outsider like a closed cabal trying to get their friends involved and keep outsiders out. I'm not saying any of that is actually what happened, but I think perception is important.

And I think we shouldn't necessarily be terrified of voting. The idea of having consensus driven processes is interesting, but I think consensus causes just as many problems, and we get back to the question of who decides what the consensus is; if it's the Chair, then we have a legitimacy question there as well. And now, as far as the specific questions, I kind of like the idea of saying that we want to make sure that our Chairs ‑‑ sorry, I'm done. I am cut off.

MARTIN WINTER: Just as a reminder again, Mirjam would probably be happy if you go and talk to her and discuss this with her in more detail. Thank you.

(Applause)

SASHA ROMIJN: So, next I would like to welcome Petter, who will speak about open source risks, perception and mitigation.

PETR SPACEK: Hello everyone. I work at ISC. I want to say that this is joint work with Vicky; she couldn't be here, but it's joint work. And I also have to thank the wonderful Chairs we have, because they allowed us to run a sort of experiment: we started with a survey among the members of this Working Group and also in other places, and in the first part of this talk we will present the results of that survey. Then we will compare that with what we do in practice in the BIND 9 DNS server, and then hopefully we will have some time for discussion.

So, thank you, Chairs, and also thank you for helping me to run the survey, because you were instrumental in getting the answers.

So, the survey. All the details and statistics for the survey can be found at this URL. We will have just a short version here for the sake of time, but you can read all the data on the Internet.

The survey started with a difficult question: what makes a project trustworthy? Here we were asking very specifically about mission critical applications, in the sense that it's not a random, you know, hello world application, but something you need to run your business. If you are selecting software for a mission critical application, what are the top priorities, what do you consider the most important factors when you decide to use that project? That was the first question. Then we went on to how you deploy and verify and so on.

The second important aspect of the survey is that it wasn't a random Facebook poll or something. It was basically sent only to expert mailing lists, places where we would expect expert users, not random Joe asking about random stuff. Someone who just installs the package from the distribution and copy‑pastes the configuration from somewhere is unlikely to participate in those mailing lists. So the assumption is that we more or less get experts answering about mission critical software. Over a period of one week we got 71 valid answers. The second thing is that, because of the selection of mailing lists and channels we used to reach the experts, the answers were heavily skewed towards DNS software: approximately two thirds of the answers were about mission critical DNS software. But the good news is that when I compared the answers for DNS and non‑DNS as two groups and looked at the charts, the answers are mostly the same. The non‑DNS people think more or less in the same terms as the DNS people. So, here we will present it together as one group.

Back to the hard question: if you are selecting software for a mission critical application, how do you decide? What are the five factors you consider most important when selecting the project?
To my surprise, I have to say, we got these five top answers. Documentation in first place, quite clearly, then an active and, you could say, helpful community. Also the way the project releases and maintains versions. And in fourth place, familiarity with the software, which probably shouldn't be a surprise, because if you are familiar with the software and have been using it for some time, you gain confidence over time. So if you know some software and you decide on another deployment, you will very likely use something you already know.

To my surprise, I have to say, in 8th place on this slide, in last place, was the history of CVEs. So apparently the history of security issues in a product is not the main factor; at least it's not among the five most important factors.

And the second slide is the rest of the options we offered in the multiple choice section. I will not read all of them, but you can see that these are considered even less important than the history of CVEs. We will get to some of them in more detail later on.

Okay, so now we have selected our project, what software to use. Of course, supposedly we should verify the signatures before we do anything with it. So we asked people: how do you verify what you have got? Half of the people said they rely on the package manager to do its job and verify the signatures. And then approximately a quarter of people go through all the trouble of actually verifying that the PGP signatures and other things match what they should.
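As a concrete illustration of that verification step, here is a minimal sketch of checking a detached PGP signature by shelling out to the gpg command line tool. This is not anything ISC ships; the file names are hypothetical, and it assumes the signer's public key is already in the local GnuPG keyring.

```python
# Minimal sketch: verify a release tarball against its detached PGP
# signature. gpg exits non-zero when verification fails, so the return
# code is all we need here. File names are hypothetical examples.
import subprocess

def signature_ok(tarball: str, signature: str) -> bool:
    result = subprocess.run(["gpg", "--verify", signature, tarball])
    return result.returncode == 0

if signature_ok("project-1.2.3.tar.xz", "project-1.2.3.tar.xz.asc"):
    print("signature OK, proceeding")
else:
    raise SystemExit("signature verification FAILED, aborting")
```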

Now we have the source code downloaded and verified; we can have a look at it, or not. It turns out that three quarters of the people don't really look at the code at all, and approximately a quarter of people look at the code in some way. That doesn't mean reading all of it, but sometimes people, for example, read the release notes and then compare the changes described in the documentation against the changes in the code between the old and new version, something like that.

Of course, now we have the code, and the question is: how do you install it? The next answer basically matches what we saw before. Half of the people install packages from the operating system, approximately another quarter install binary packages from some other source, and around 18 percent install from the source code.

The next question is: okay, you finally installed your software, but eventually you need to upgrade it, and as we know, on an upgrade something can break. So what do you do, how do you mitigate the risk that it blows up after the upgrade? It's quite nice to see that approximately two thirds of respondents said that they have the ability to roll back very easily, so they are not super concerned. That's nice. I would like to have it at a hundred percent, but two thirds is not bad. Then half of the respondents said that they do their own tests before deploying. That's also good, I would say.

Again, all the details are at the URL in the slides; we don't have time to read all of it.

And now we are getting to the question: how do you test before production? Again, half of the people indicated that basically they don't, because so far so good. And I think that's also interesting, because it tells us that the quality of the random software you find on the Internet is not so bad, since it doesn't explode on every upgrade. Half of the people don't feel the need to run tests before upgrading, because apparently it has been working so far.

And then you can read for yourself that the other half of the people test the software in some way. Just around 10% of respondents bother to actually run the unit tests included in the software, maybe because, you know, the developers are supposed to do that, so why would they run them again. It makes sense.

With that, that's a brief overview of the survey. Now we will fly through the BIND 9 DNS project, and that will serve as a comparison, a reality check between what the survey says and what the project does. And that turned out to be more interesting than I expected when I proposed the topic.

A quick recap. I'm not sure how many DNS people we have in the room, but BIND version 9 is a 26‑year‑old code base, so that's really a project with some history. It has over a quarter of a million lines of C, excluding comments, whitespace and so on. It's a complicated system: over 50 people have contributed to it and still have at least one line in the Git today. It has a complicated configuration, don't get me started on that. And over the years it gathered on average five CVEs assigned every year throughout its history. Which might be fine, because people apparently don't care that much.
So the question is: I want to use a DNS server for my critical application, and this is a super large, super old code base, how can I trust it? And the answer is basically: you can't, because there is no way for anyone to read all the code. And we at ISC were thinking: well, we know the code, but there are dark corners, and we would have a better feeling if we had somebody reading the code and seeing what's hiding in there, because maybe we have blind spots and don't see stuff. So we contacted an auditing firm which had previously worked on the Unbound project, so they had some exposure before. You can read all this on the slides.

Anyway, after a couple of months of work, they found lots of things of various severity. One of them rose to the level of an actual CVE, but there were lots of low level bugs like buffer overflows, because BIND is in C, so it's easy to make bugs like that.

So we were kind of happy that they didn't find a glaring hole, except that some things fell outside the audit's limits, because these auditors were focusing on the lower level things, the usage of the C language, how it interacts with the operating system and so on. While the audit was running, we got seven CVEs for BIND. Some of them were not BIND specific, because there is a generic problem in the DNS or DNSSEC protocol, and some of them were BIND specific. And the difference is that the audit was focusing on low level usage of the language and the machine, but all these bugs on the slide were actually high level problems, in the sense that some sort of limit on resource consumption was missing, or there was a logic bug that allowed the attacker to crash the server. Things like that.

Maybe the good news for the BIND project is that apparently people don't consider this kind of problem a top priority; it's not in the top five. So that's the comparison with the survey: you see, only about a quarter of people consider the history of CVEs one of the most important factors. So, maybe that's why people still use BIND.

Anyway, of course, inspecting old code or doing audits is a kind of after‑the‑fact exercise; you already have the code done. But we would prefer to not introduce the bugs at all, so we have coding standards for how we are supposed to make new features and modifications, what reviewers are supposed to check, and lots and lots of other non‑technical requirements which are kind of self‑imposed. I will not go through the details because it might be boring for the technical people here, but if you are interested, you can have a look at the slides. The links are here, the blue text is clickable, and the software quality badge will lead you to the documents which list the non‑technical things which are considered important for the quality of the project. Take security issues again: we have a policy for how we handle them, how we score them, and so on and so forth. In short, it's a tonne of work which is totally invisible to anyone not working in the core team, because if you open the source repository, you will not see a commit saying: well, we spent the week analysing this CVE. It's not visible in the code, but it's a lot of work.

And funnily enough, according to the survey, that's again something people don't really care about, because only 11% of respondents considered the way the software is developed an important factor when selecting a project. That was kind of eye opening for me.

And just briefly: in an attempt to prevent bugs from entering the code base, we do code review. BIND is kind of specific in the sense that we don't really get external contributions; there are very few external contributions over time and they are usually very small. So, in essence, it's one core developer reviewing the work of another core developer, and it's literally a peer review. We also have a bunch of automated tests, because humans are not really good at catching some types of bugs, and performance tests and whatnot on top.

We kind of take pride in our test coverage, and the project literally has something like 12,000 assertions in an attempt to catch bugs early. So if something unexpected happens in the code base, it will supposedly hit an assertion and crash the server, which is obviously a denial of service. But this was thought of as a trade‑off: we would rather crash the server than allow the unexpected state to propagate throughout the system and possibly corrupt the data or whatever. So this is again a debatable decision of the project, let's put it that way.
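To illustrate the trade‑off being described here: a minimal sketch of fail‑fast assertions. BIND itself is written in C and does this via assertion macros; this Python fragment, with hypothetical names, only shows the idea.

```python
# Sketch of the fail-fast trade-off: when an internal invariant is
# violated, crash immediately rather than keep serving and risk
# propagating corrupt state. (BIND does this in C via assertion
# macros; everything below is a hypothetical illustration.)

def cache_store(cache: dict, name: str, ttl: int) -> None:
    # Invariant: callers must only hand us already-validated TTLs.
    # If this fires, an upstream layer is broken. Crashing here keeps
    # the bogus record out of the cache and localises the bug, at the
    # cost of a denial of service for this one process.
    assert ttl >= 0, f"negative TTL for {name}: internal state corrupted"
    cache[name] = ttl
```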

Continuous integration: nowadays everyone has that, basically all of us. We run it on multiple architectures to catch architecture specific and distribution bugs; they still happen nowadays.

And again, it turns out that we go through a lot of trouble and end users don't care, because according to the survey, only 13 percent of users consider the project's test coverage an important factor when deciding whether the project is worth using for a critical application.

Peer review. What can I say? Someone has to read the code. In more practical terms it means someone goes to GitLab, reads the code, and once the reviewer is satisfied, he clicks the approve button, seal of approval, good to go, and then we release.

Here is an example: we went through all these checks and reviews and the release process, just to find out a couple of hours after releasing the new version that someone complained it was broken in sort of a weird way. And then we found out that we had just forgotten the W in a conversion table. The automated tests didn't catch it, because it only manifested in a combination of factors which was not covered in the tests: it had to be a capital W, and it had to be a DNS wildcard at the same time. We have tests for wildcards, of course, but they didn't have that specific combination. So that's again an illustration of the limits of automated testing.

And peer review also has its own limits, because of course developers like to read the code and think about the weird corner cases, but very rarely does anyone bother to read all the tables; it's a table, it must be fine. But the English alphabet has 26 characters, and making a square out of 26 is kind of hard.

After fixing the bug, we had to respin the release: basically throw away the new, now old, version number and start over. In the case of BIND, that's a kind of intricate process which makes me crazy every time I have to do it. But it boils down to something like: check what we are going to release; write the docs, which is what's in first place in the survey, finally something we do that end users care about; run the tests, which no one cares about again; generate a tarball, we'll talk about that; and the packages and so on.

I think the most interesting part of the release process is the reproducibility. Funnily enough, we have seen with the XZ project compromise that the tarball does not necessarily match what's inside the Git repository, and I am kind of proud that we thought about that problem before it became publicly known. So we have a script which takes the Git repository, a specific tag from the repository, and the tarball, and does a cross‑check of whether they match or not. Before we sign the release, we check that the tarball actually matches what's in Git, and anyone can check it again. The script has 100 lines, so anyone can go and actually check it with the naked eye, and the hundred lines include the Docker file, so basically the stuff and packages you need. It's not hard to check, so I find it interesting.
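For a flavour of what such a cross‑check can look like: a minimal sketch, not ISC's actual script. The repository tag and tarball name are hypothetical; it hashes every file in a fresh git archive of the tag and compares that against the contents of the release tarball.

```python
# Sketch: cross-check a release tarball against a Git tag.
# Tag and file names are hypothetical examples.
import hashlib
import io
import subprocess
import tarfile

def file_hashes(tar: tarfile.TarFile) -> dict:
    """Map member path (minus the leading directory) to its SHA-256."""
    hashes = {}
    for member in tar.getmembers():
        if member.isfile() and "/" in member.name:
            path = member.name.split("/", 1)[1]  # strip "prefix-1.2.3/"
            data = tar.extractfile(member).read()
            hashes[path] = hashlib.sha256(data).hexdigest()
    return hashes

# `git archive <tag>` writes a tar of the tracked tree for that tag.
git_tar_bytes = subprocess.run(
    ["git", "archive", "--prefix=src/", "v1.2.3"],
    check=True, capture_output=True,
).stdout

with tarfile.open(fileobj=io.BytesIO(git_tar_bytes)) as git_tar, \
     tarfile.open("project-1.2.3.tar.gz") as release_tar:
    from_git = file_hashes(git_tar)
    from_release = file_hashes(release_tar)

# Files only present in the tarball (e.g. generated configure scripts)
# deserve extra scrutiny; differing hashes are an immediate red flag.
only_in_tarball = sorted(set(from_release) - set(from_git))
differs = sorted(p for p in from_git
                 if p in from_release and from_git[p] != from_release[p])
print("only in tarball:", only_in_tarball)
print("content differs:", differs)
```

This is exactly the class of check relevant to the XZ case mentioned above, where malicious build code existed only in the shipped tarball and not in the repository.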

And then signing, of course. We don't want to sign random things with our signing keys, and we don't want anyone stealing the keys without us knowing. So for the signing keys, every person has their own hardware token with the key inside. The key is not exportable, and if you want to make a signature, you have to press the button, so you know that you have actually made a signature. That's another layer of defence against signing something we don't want to sign.

But again, it's a lot of trouble to go through and no one really cares. Not no one, let me correct myself: it's about a quarter of people who actually care about the signatures made through this complicated process.

And packages, that's again a lot of thankless work. We build them for Debian, you name it. But again, there is a slight disconnect with reality, because we spend energy on it, but less than one third of users actually use the binary packages produced by us, since half of the users take binary packages from the operating system, and we have no influence over those.

So, this is the last slide I have, and it kind of summarises what we have seen in the survey and what we do in the BIND DNS project in particular. For me it's kind of disturbing to see that we have invested a lot of energy in processes around security issues and how we review stuff and how we test stuff, but according to the survey, if we are going to believe it, none of that is anywhere close to the top priority, because the top five priorities were all non‑technical: documentation, community, release policy and so on. I see a strong disconnect between what we do and what the survey says. Now the question is, for example: is this because all the users implicitly assume that the projects do these things, so we have to keep doing them even though the survey doesn't indicate it, or is it something else? Hopefully we will have time for a discussion, because this is the end of the slides, and I very much hope that you will express your opinions on what you expect the projects to do implicitly and don't say out loud in a survey, or whether we are insane and should stop doing these things, or... you name it.

MARTIN WINTER: Thank you. The queue is open. We have quite a bit of time left. So bring up your questions.

AUDIENCE SPEAKER: Aleksi. I think there is kind of a misunderstanding here. Do my users really care if I'm doing unit tests? No, they don't, because they just want the tool to be up and running. Do I care as a developer? Of course, because if I want to do any refactoring, there is very little chance that the unit tests will pass until I fix all the places where the old coding conventions were used. So the discrepancy doesn't mean that you are doing something wrong; it just means that your target audience is different. Do I want the consumer to run unit tests? Of course not; why should they spend time on it? Maybe a few suggestions about packaging and operating systems, which to me is a huge pain, especially Debian, they like to do their own patches.

If upstream is cooperating with my operating system, to me that sounds more important and more productive than just publishing RPM repositories on your website, because usually nobody uses them and it can cause all kinds of trouble. If you are working with upstream and ensure that the patches you are applying make sense, that the versions you are using make sense, etc., it helps a lot. And the last point is probably about CVEs. I think that's a different kind of problem; it's fundamentally broken there. Most CVEs get a high or critical rating. This was recently pointed out by the curl author, who got a bunch of absolutely minor issues rated critical and high and spent basically months trying to get the ratings lowered. And to me, the situation is kind of strange: you put a lot of energy in, you get this crap in the databases, and then you spend even more energy on it. I don't know what to do there.

AUDIENCE SPEAKER: Marcos Sanz here from DE‑CIX. Thank you very much for organising the survey, which I think was a great thing; I participated in it myself and I enjoyed it. We shouldn't draw the wrong conclusions looking at this table. It's not that because people put automated testing low in priority, you should stop doing it; that would be the wrong conclusion. I'm not going to run your unit tests, because I don't expect you to deliver something which does not pass the unit tests. I am not reviewing your code, because I expect that you have a process where people review code. So we are in a kind of complementary situation, where you do some things and you are the expert on your side, and I am the expert on my side. I think you are doing it right. I am very happy you speak openly about the processes you are using for BIND. Keep up the good work.

PETR SPACEK: A clarifying question to your comment. Do you interpret this survey question as being about the user visible stuff? Because this question specifically mixes the things you interact with daily with the behind‑the‑scenes things, like the development process and so on. So do you think that the top priorities for users are top priorities because they deal with them every day, and they just don't even think about the development processes? Is that what you are saying?

AUDIENCE SPEAKER: That's a very nice way of putting it. I hadn't thought of that myself. But yeah, I think there is some truth to it.

AUDIENCE SPEAKER: Jim Reid. Marcos summed up the situation fairly well.
In terms of doing testing or evaluation of BIND 9 releases, I don't bother, quite frankly, because I trust ISC, who have got a great pool of talented engineers. I know they are going to turn out code which most of the time is in excellent shape; we get an occasional bug here or there, so what, there are always bugs in software. So I don't need to do code reviews or any kind of personal regression tests, and I can hunt you down at an IETF meeting or a RIPE meeting if ever a problem arises. So I don't bother checking, and I think many other people in the room probably have a similar approach. Where I think the problem lies, in my view, is not so much with the software that you are producing; it's what people then do with it later. I'm thinking particularly about the people who are distributing packages for various open source operating systems. I suspect the level of quality assurance and testing that they do maybe isn't all that great, and they may all do the same thing as I do and say: oh, this is the latest release, they have tested it, I don't need to worry any more, I don't need to do any kind of regression testing of my own before I throw it over the wall into the public distributions. I think that might be something for you and your colleagues to look at: have a conversation with these people to see what kind of testing they do and how well it matches up with what you are doing.
I think another problem we have got is not so much with the software itself, but with the ancillary stuff that you rely on; I am thinking particularly of OpenSSL. You have got all the problems that this then creates which are outside your control, but you are constrained because you need to use that crypto library for all these other things, and we get situations like we saw with OpenSSL and the heartbeat thing, like ten or fifteen years ago, and then we recently saw something with the XZ compression stuff. It didn't affect BIND, but it's an indication of the problem we have generically in the open source world, because the amount of testing done by whoever is producing the software may not be all that great. Some of these packages are one person working on it in their spare time, so they are constrained in what they can do with their own resources, and in what checking and looking over they can do before they throw it over the wall into a release. That's another part of the problem.

The final point I want to make is: this is a great presentation and I loved everything you had to say. It would be great if you could write this up and put it out as a blog post: this is how we test and do QA on our releases before they get shipped. I think that would be a wonderful thing to do, and it may also encourage others to do the same. Thanks.

PETR SPACEK: Thank you for the idea. I think I might need to thank the stenographers, because that might save me tonnes of work.

AUDIENCE SPEAKER: What happens is, if you are developing an open source application, think about its users, who is actually using it. Those are system admins or network admins, who are mainly concerned about the operation of the application rather than its internals, and for them documentation is definitely much more necessary than the rest. They don't need to go through the QA process, the continuous testing process or anything else. And they don't necessarily keep track of upcoming CVEs or anything on the mailing list the way people did a couple of years ago, because the software development paradigm has shifted a lot in the last 10 or 15 years in terms of security. Thank you.

AUDIENCE SPEAKER: William. I just wanted to say thank you to ISC for doing this work, because it will also be very beneficial for us as a software vendor. It's really valuable not only for makers of software but also for the users, because they will benefit from this as well. So, thanks. Excellent work.

PETR SPACEK: Thanks. I am starting to blush, stop it.

AUDIENCE SPEAKER: Plus one for that. I think you have done a great job, so keep it up. And I do think that you are running into the same problem that the car industry has, right: people care about whether they have got the TV screen in the back to keep the kids amused, because everybody has figured out how the seat belts work and nobody tends to worry about that. So, with the exception of Volvo drivers, nobody seems to ask about it; they assume that you have taken care of the security problem.

PETR SPACEK: That makes sense.

AUDIENCE SPEAKER: Just keep up making sure that the seat belts work.

PETR SPACEK: That also makes sense.

MARTIN WINTER: Just a quick reminder, please state your name and affiliation when you come up to the microphone. No more questions... thank you very much for your presentation.

PETR SPACEK: I would like to kick off one more discussion point if we have the time, if it's okay. Thank you.

Someone, maybe Jim, mentioned that the trouble might be the distributions, because people use a tarball, then the tarball gets included in Debian, Ubuntu, you name it, and then some distributions decide: okay, we are not going to change this tarball ever again during the lifetime of this particular distribution. The question is: what do we think about the model where the distribution applies hand selected patches to the tarball, as opposed to upgrading to the new minor version of the same software? Different people have different opinions on that. There is the school of thought which says: okay, if you hand pick the patches and apply them one by one, you get a better quality tarball, because over time the important bugs are eventually fixed, and you have a stable version which cannot possibly introduce a breaking change, because you are not really upgrading. The second school of thought says: well, if you apply a patch, you have just created a new unique version, so it's just you running that unique version with the patch, while the rest of the community is running something a little bit different, and now you might have a new security issue because you have missed one of these other important patches or something. We have a couple of minutes reserved for discussion, and I would like to hear opinions on what people think: do you agree or disagree, what is your distribution doing, do you see a way to influence what the distributions do, or is what they are doing, like Debian and Ubuntu and Red Hat and so on, okay, and should they keep doing it? I would like to have some discussion if we still have time for it.

AUDIENCE SPEAKER: I would like to start. As a Ubuntu developer, I hate this practice, because constantly, every month, on our mailing list we get reports from some modern operating system using some ancient version of the tools, and it's not even that ancient version, it's something strange, because they applied some of the patches and some they forgot. Also, the distribution maintainer supposedly knows better than me, a core developer of 20 years on this project, what needs to be applied. So don't do it. And I think it made some kind of sense when we were living in the world of Red Hat 6 and very, very stable operating systems which everyone was afraid to touch, right? Now we are living in a containers world where things are super dynamic, where you can easily upgrade using a Git flow many times per day; you can release often and quickly. One thing which is kind of a legitimate exception is patches specific to the distribution, and typically they are very small, like init scripts, and I am fine with those. The only thing is, I feel brew on Mac has a very good practice: if you want to apply any patch, show that you tried to upstream it, and show that either the vendor said this patch doesn't make sense inside the project and needs to stay outside, or they accepted it but it's not yet released. Then you can apply it. I think that's the best way to handle it.

PETR SPACEK: That's an interesting approach.

AUDIENCE SPEAKER: Ondrej, Debian developer since 1999. I think it's a fallacy to think that patches help to keep software stable. Of course, there are some kinds of problems, and things could break if you upgrade to the latest minor version, but for complex software, it's a fallacy that it helps. So, for the stuff I maintain in Debian, BIND and DHCP, I actually made a deal with the release team and the security team that we push the latest minor versions to Debian. Ubuntu doesn't do that, and we see that it creates problems with complex software. It is problematic for stuff like BIND or DHCP or anything that's more complex than, I don't know, the fortune programme.

PETR SPACEK: If I'm correct, Firefox and other web browsers are being upgraded in all the distributions. There is no distribution which patches Firefox.

AUDIENCE SPEAKER: Yeah, I think so.

AUDIENCE SPEAKER: Maria, BIRD, cz.nic. We are doing the same with BIRD: if the maintainers in the distributions even try to put in some patches, we push them to actually upstream them. We actively try to pull those patches from the distributions and integrate them, or convince those maintainers to remove those patches, because some of these patches are bonkers. Sometimes it works, and sometimes we just get angry with each other and it doesn't work.

And I really must agree very much with all those who say that software is complex and applying random patches is just a no‑go. It may kill the whole performance, it may add other security holes. It's just a whole mess.

MARTIN WINTER: We close the queue after Shane.

SHANE KERR: So, my understanding is that there are organisations that have very strict policies about version numbers, which is probably encouraged by enterprise Linux, for example, where you have versions which don't change for 13 years or something. And I know people working with these organisations don't care. They literally wouldn't care if you patched a third of the programme, as long as you kept the patch release version the same, right? It's the number that counts, because that's what their senior vice‑president of keeping things the same is going to insist on. So I guess the question is what to do about this. If I'm an engineer working in some huge telco that has this policy, I have no power to change it. I can complain to my boss and they can complain to their boss, but probably it's not going to get changed. So as an open source community, what's the best thing? One approach is the sort of accelerationism approach: we just do what you are saying, run the Docker things, keep things dynamic, and if you are in a company that refuses to upgrade versions, you are going to end up with insecure, buggy stuff, and we just say let those companies burn. I don't know if that's the right approach or not. But I don't really know any other approach, other than to keep letting people backport patches and having to deal with the support. So I don't know if there is a good solution here; maybe people smarter than me have a good suggestion. But... that's it.

PETR SPACEK: Maybe we should have our own packages which always have the same version number and change only the number after the dash, but in fact it's a new version every time. Whatever, I am joking.

SHANE KERR: You are joking, but if you put up a web page which said: this is exactly the same as this other one, but we preserve the version number for you so you can convince your boss it's okay...

(Applause)



MARTIN WINTER: Next up we have Maria; she is following up on the contribution policy talk we had last Working Group session.

MARIA MATEJKA: I have like 11 seconds. That was better, thank you.

First of all, hello, I am Maria. I develop BIRD at cz.nic, but this presentation is not in the cz.nic template you are used to seeing, because it's quite a joint work. We did a lot together with Valerie, and I didn't think it would be fine to use the cz.nic logo on a thing which was actually not done on behalf of cz.nic. It was, partially, on my part, but not on hers.

Well, I am following up on the last session, where we ranted about how contributors are sending crappy patches, and how maintainers are withholding credit from contributors. So, we tried to do something about it.

What we created is a template for a contributions and credits policy. If you look into BIRD, into the file CONTRIBUTING.md, it says how we are going to handle your contribution and how we are going to assign credit for it. And if there is an edge case, if something is strange, there are guidelines for how we are going to work with it.

So, some of you at least are maintaining open source projects, and I'd like to say: your project needs a contributing policy, at least if you want people to contribute, because they should know how to contribute. And, maybe more importantly, they should know how not to contribute. There are situations which you don't like. You don't like contributions where somebody just sends a 1,000 line patch with a refactoring. Or maybe you do want those, I don't know. I don't want them.

And you'd like to tell them how you are going to handle that. So, we explicitly state that if you send us an unsolicited refactoring patch, it is going to be rejected. The goal is simple: we want an explicit list of expectations for everybody around. It's not about setting down a set of rules, it's nothing legal. We are trying to set expectations upfront, and this is the main goal.
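To make that concrete, here is a hypothetical excerpt of what such a CONTRIBUTING.md could look like. This is not BIRD's actual policy, just an illustration assembled from the expectations mentioned in this talk.

```
## Contributions and credits (hypothetical excerpt)

* Describe in the commit message what the change does and why.
  Patches whose message only says "fix a bug" will be returned
  for clarification before any code review happens.
* Unsolicited large-scale refactoring patches will be rejected.
* Every accepted contribution is credited in the CREDITS file;
  send a pull request adding your name if we forget.
* We aim to respond within one month; if we don't, please ping us.
```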

So, how can you get a policy? It's quite easy. There is a link; if you download the presentation, it should be clickable. Actually, I hope that it's clickable.

There is a GitHub repository. There is a guide. There is kind of a template, and you just choose what fits your project from that template. No two projects are identical. You would not contribute the same way to BIRD, to SQLite or to the Linux kernel, and you would not contribute the same way to random firmware released by a shady company just to make it open source while actually not expecting any contributions at all.

There are projects with existing policies; you can look at those. If you are missing some project in the list, there is a file in the repository: you can just edit it and send us a pull request. Please do it. If you have a project with an existing contribution policy, please send us a pull request.

Also, this policy guide has a contributions policy. So, please read it before contributing.

How to create the policy: it can be a bit difficult if you are doing it from scratch. So, there is an introduction in the guide. There are some reasons why to have a policy. There are some situations which different people have come across; we collected these from the last discussion, from the discussion on the mailing list and from some other people around. And you should think about what you are doing now. Having a contributions policy doesn't mean that you have to change yourself. It's a request to describe yourself, and to describe how you are behaving right now.

So, you just write down what you like and what you don't like. You write down what you expect from your contributors. And, this is a bit harder, you may need to create your internal policy rules to say how you are actually going to implement the policy. But please keep in mind that the policy should help you. It's a guideline for yourself, and it's a place you can point people to. You can tell them: please read the contributions policy; your contribution is violating these points, and we have set those points to make maintenance easier for us, to make it easier for you, and to make it easier for all the users to see transparently what's going on. So if I say I'm not accepting patches with commit messages that just say "fixing a bug" or something like that, I should write it into the policy and then I should actually enforce it. This is what I did just several hours ago: I responded to somebody, please, I can't actually understand what you are suggesting. I know that your contribution is a four line patch, but I refuse to read it, I refuse to do the code review, until you tell me what you are expecting from me, until you describe what you have actually done. Because it's half of the work, actually more than half of the work, to document what you have done. Sometimes it takes several hours to write one single commit message that describes the change correctly. And this is why we have this in the policy.

I'd like to say we expect your contributions to the policy guide. Please read it, please look at those situations, look at those examples which you can have in your contribution policy, and if you are missing something, if something is bad, if something is wrongly formulated, please suggest changes ‑‑ you can just send us a pull request and we are going to handle it.

Please read our contributions and credits meta‑policy. And I would like to say we are going to credit you accordingly.

So, what's our policy? We welcome good faith contributions to this policy, which means also: please don't send us AI‑generated text. Well, you can send AI‑generated text as a new contribution, but please don't pass the whole guide through an AI to make it better. That's a no‑go. That's a refactoring.

Send a pull request and add your name to the credits file. If something breaks, contact us.
And the last thing is something you should have as well: we aim to respond to you in one month. It's an expected time frame in which you get the answer, which should also say: if you don't get an answer within one month, please ping us. We are just people, we can lose track of it. It's not that we want to ghost you; we just want to set an expectation, and if this expectation is broken, please say that the expectation was broken. This policy applies to itself. I always love the recursion.
I got asked to suggest a BCP. This is a crude draft; I just drafted it somewhere in the hallway. It says that OpenSource projects should have a contributions and credits policy, and that they should be quite open. And that the OpenSource projects should actually use the policy and adhere to it. And that they can use the policy guide for this.

For this suggestion, I expect you to tell me what should actually be there. It's open for discussion; we can speak about it more, including whether it's a good idea to have a BCP or not.

And I should add some remarks. This is a continuation of my and Valerie's talks and some other work on the mailing list. I must also note that this work on the mailing list was happening at the exact same time as the beef about the Working Group Chair selection, and not a single one of those newcomer voters ever commented.

There were some notable contributions from Martin Winter and Marcos Sanz.

And that's basically all. And there is time to discuss.

SASHA ROMIJN: Thank you very much, Maria.

(Applause)
I actually still have projects without a contribution policy, but I have some new ideas that we need to discuss later. But for now, let's hear from Shane.

SHANE KERR: I actually have a question about projects which don't have a policy. So, do you have a recommendation for going back and adding documentation about previous contributors and things like that, if you didn't keep track? Like, is it a good idea to go through your Git history, or maybe migrate it into Git?

MARIA MATEJKA: It depends. Sorry, I have kind of a lawyer background, and if you ask two lawyers, you get three opinions. It depends. You may be able to track the history somehow. You may try to reconstruct it, or you may just say: well, there is a history which we couldn't uncover properly, so instead of that we said everybody who contributed before this time is just history, and from now on we are going to track it properly. Or you can say: we tried to reconstruct it as well as we could, but if anybody feels that they should be there and they aren't, please send us a pull request. That also works. It's just about where the information is and how much time it takes.
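As a minimal sketch of what such a reconstruction could look like, assuming the history is already in Git and using only standard Git commands (the CREDITS.draft file name is made up for the example):

    # List every author recorded in the repository history,
    # with commit counts and e-mail addresses, across all branches.
    git shortlog -sne --all

    # Turn that into a first draft of a credits file; 'cut' drops
    # the leading commit counts, keeping only names and addresses.
    git shortlog -sne --all | cut -f2- > CREDITS.draft

Anybody missing from that draft can then, as Maria suggests, send a pull request.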

SHANE KERR: That's fair. Thank you.

SASHA ROMIJN: Anyone else with comments? Also on the proposed BCP text that was put up before. We will of course have proper discussion on this on the mailing list. But if there are no other comments, then I would like to thank you, and of course Valerie, who is not here but started this effort at the last RIPE meeting, very much for your work on this. We'll continue the work on the BCP with the community on the list.

MARIA MATEJKA: Thank you. I will send these exact things to the mailing list for further discussion, and let's see what we come up with. Thank you.

(Applause)

SASHA ROMIJN: Next is Martin, with a lightning talk on roadblocks to OpenSource in Asia Pacific.

MARTIN WINTER: So, for the ones who know me better: I used to live a few years in Asia Pacific, so I had a lot of discussions there about what people are using. And this is based on a BoF I did in Bangkok. My impression for a long time was that, when you look at the different countries, if your economy is doing worse, salaries are cheap and you don't have much money, you are more likely to use a commercial product, and if you are a well developed country, you are more likely to use OpenSource. And it's completely wrong. That was my personal view, so I looked at whether it is actually a fact.
So, somebody made a statistic about OpenSource contributions on a per‑capita basis. I listed here the top three, which are all in Europe, all countries that could afford to buy commercial software. And then I listed all the other ones which I found from the Asia Pacific area. And when you look in there, you notice countries like New Zealand, Singapore, Australia, Taiwan, all the better developed countries; but all the other countries, Thailand, Cambodia, Malaysia and so on, are just missing there.

So, I wanted to try to figure out why that is. I wanted to have more discussion with people, so I did the BoF and had a lot of hallway talks at the APRICOT meeting, and it was quite interesting. I want to show the results here, and see if you have any ideas.

So, a lot of it I got as indirect feedback, just to keep that in mind. Like corruption: nobody wanted to admit that they are actively interested in it, but I heard a lot of them saying, yeah, it may not be us, but others are, because if you buy from a commercial company, the management is used to getting a kickback. Obviously if you get things for free with OpenSource, there is no kickback involved, and they are not really interested.

I also heard a very interesting case, specifically in Cambodia, where any network equipment you put in ‑‑ even if you would just run something like BIRD for routing ‑‑ would basically count as a router and would need to have a certification. And the certification basically makes it impossible to run it there, because, talking in more detail, it really only allowed routers from them: Huawei was co‑writing the bill with the government and actually figured out what needs to be in there. So you have corruption not just directly at the company level but also at the government level. A lot of companies in Cambodia ignored that rule, but some of them actually said: I don't want to ignore it, because the government could put us out of business if somebody there doesn't like us.

So I have no idea how to fix that to be honest.

Then "not familiar with it" I heard a few times. Obviously the community is not that strong, so a lot of times people say: we might be interested, but I don't know anyone else using it, and we haven't had any training. So I think, if you are here and you are in OpenSource, you could probably do quite a bit: show up in Asia Pacific if you can, maybe give some training there. In general, they only get training on commercial products.

I also had discussions suggesting that people are a bit scared of putting things together themselves. You know I work on FRRouting; telling them, just take a PC and load FRRouting, they were saying: oh, how big does the PC need to be? How much do I have to customise? So they would be interested in a complete solution, like an IOS or something, where they can get it together with the hardware in one place.
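To give an idea of how small that barrier can actually be, a minimal sketch of loading FRRouting on an ordinary Debian or Ubuntu PC might look like this (package and file names as shipped in the FRRouting packages for Debian‑based systems; details can differ per release):

    # Install FRRouting from the distribution repositories.
    sudo apt install frr frr-pythontools

    # Enable the BGP daemon (disabled by default) and restart FRR.
    sudo sed -i 's/^bgpd=no/bgpd=yes/' /etc/frr/daemons
    sudo systemctl restart frr

    # Configure routing interactively in the integrated vtysh shell.
    sudo vtysh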

That was a lot of the feedback: it seems to be quite a bit easier that way. So if you have a product, maybe at least document specific hardware and how you can do it on that.

And then the risk of failure: it seems to be still quite common that people think, it's OpenSource, I get no support. The company buys the support; the support may be horrible, but at least I have somebody to call. I tell most of them that for a lot of OpenSource projects you can get paid support too, many times even from more than one source. What I try to tell them: hire a few local university students, train them so they can do the first few support levels, and you can still have some other support as a backup. But a lot of them are obviously still scared that if things go wrong, it's their head. If they buy Cisco or something like that, then they can blame the other ones.

It's unfortunately only a lightning talk, so no time for questions, but if you have ideas or comments, please see me somewhere in the hallway, especially if you think there is a way to get the community a bit more on the OpenSource train. Thank you.

(Applause)

SASHA ROMIJN: Thank you very much, Martin, for this talk, and you know where to find Martin if you have input on this subject. And for our last lightning talk, I would like to ask Gergana to talk about this year's RIPE NCC Community Projects Fund.

GERGANA PETROVA: I am a RIPE community development manager. And I'm going to make a very short presentation about the RIPE NCC Community Projects Fund.

Probably a lot of you are familiar with this already, so I will run through it quickly. My main message here is: please spread the word about this initiative. What is the Community Projects Fund about? For many, many years, the RIPE NCC has been supporting projects that I would describe as for the good of the Internet; that was the expression that we used back then. In 2017 we decided to create a process around this type of funding, so we officially created a programme called the Community Projects Fund. The idea behind it is to distribute €250,000 a year, combined, to projects that are of benefit to the community.

So far, we have funded 35 projects. And we have launched the call for this year's projects, which I would like to ask your help in spreading the word about.

How do we do this? Once we qualify the projects, we enlist the help of community volunteers, the Selection Committee; you can see their names on the slide. They go over the projects that we receive, they give us their recommendation for what we should fund, and they also give us an estimate of how the financial amount should be distributed.

Toward the end of every year, we also open a call for volunteers, so if you are interested in taking part in the Selection Committee, consider applying for that as well.

What is the selection process? As I mentioned, this committee of external volunteers are the ones that evaluate the projects. To do this, they have six main categories; you can see them on the screen: the quality of the plan and the approach; the team; diversity, like what sort of applications we have received, geographic diversity as well; innovation; knowledge sharing and general interest; and the impact the project is going to have on the community and maybe even globally.

So, after the applications close, each individual committee member evaluates the applications independently; we have a submission system where they can do that. Then the top ten projects that each Selection Committee member has individually selected get pooled into a short list. Then we gather the committee together, they discuss the short list of the best projects, and they make the final determination. So this is entirely done not by RIPE NCC staff; you saw the list of committee members, and there is also one rotating representative from the Executive Board.

This is a rough timeline of how the programme is organised. Every year, usually in early spring, we launch the call for applications. This year, the call is going to be open for four months; we have extended the duration of the call just to give people the chance to really think about their project, ask any questions if they want to, and to allow more people to submit.

So, after the four months ‑‑ we launched in early March, and on 31 July we close the call ‑‑ the Selection Committee has just over a month to go over the applications, in the process I explained, and do the three rounds: first reviewing individually, then creating a short list, and then making the final selection.

After they do that, we of course contact the selected projects. We need a few weeks there to make sure the project is still going on, that they are still interested in the funding, here is the contract, sign the contract and everything. Once that is done, we're ready to officially announce the selected projects.

And then, usually by the end of that calendar year, we also distribute the funding to the projects. So we have already finalised the contracts and we're ready to distribute the funds.

And then it's in the next calendar year that the majority of the work on the projects takes place. So this year, 2024, the projects that we selected in 2023 are being worked on. Six months after the start of the new calendar year, we usually have reporting from the projects we selected; that's intermediary reporting that they submit just to us. Sometimes, since we have our own researchers in house, we could also provide some guidance. Then, a year later, we expect those projects to submit their final Labs article. If they want, they can also make a presentation, but the Labs article is usually the minimum required from the project.

Here are some examples of previously funded projects; as I mentioned, it's 35, so that's a really long list. I think it's particularly relevant for this Working Group to think about this. You can find more at the URL listed. The selected projects are listed per year, and on those pages you can read a little bit more information about each project, the country where the project is coming from, the amount distributed to each project, and also links to either Labs articles or to the website of that project.

In addition, if you are considering, for example, submitting something and you want to hear more details, we have had Open Houses where the selected participants present their projects. We had three Open Houses last year, and the recordings are available online; we also had two Open Houses this year, and those recordings are available online as well.

So this is all I wanted to share. The call for applications closes 31 July. You can see the URL on the screen, and if you have any questions, my colleague in charge of this project can help you.

MARTIN WINTER: We are out of time.

(Applause)
So, yeah, we are coming to the end of the session. Please, please rate the talks: log in, go to the programme there and rate the talks, so we get a bit of an idea of what you want to see and how much you like these presentations. And with that, that is the end of the session. Thank you everyone for attending. See you again at the next RIPE meeting.

(Coffee break)

LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.