RIPE 88

Archives

MAT Working Group session
RIPE 88
21 May 2024
4 p.m.

STEPHEN STROWES: Good afternoon. We're going to start in one minute, if you want to grab a seat.
That worked wonders. This is like a class of school children and everybody is now staring patiently waiting for the teacher to start. I'm going to do this in future. Good afternoon and welcome to the RIPE 88 MAT Working Group session.

Welcome. Welcome to Krakow. I am Stephen. I work for Fastly. I am joined by my co‑chair Massimo, who works for NTT, one of the largest Tier 1 providers, and the other co‑chair who isn't here is Nina, who works for Kentik, one of the leading network observability platforms. And we have a really interesting agenda for you today with five different talks. We are going to be touching on aspects of inter‑domain routing, Whois, RIPE Atlas usage patterns and Starlink constellations. If you enjoy following the chat or engaging in backchatter, please join the Meetecho. We will also be monitoring the Q&A in Meetecho for questions, and we will relay those to the audience. Please also remember to rate the presentations as we go. We do actually find that feedback useful, whether you like or dislike some of the content that we're bringing in.

So, without further ado, we are going to get started. Our first presenter is Savvas, and he is a PhD student at Lancaster University who is specialising in Internet measurement, inter‑domain routing modelling and traffic misdirection attacks.

SAVVAS KASTANAKI: Thank you for the introduction. First of all, thank you for accepting me in this meeting, it's my first time here. Everything is so well organised and the vibe is so friendly, it seems like you are not network operators. I am a PhD student in the School of Computing and Communications at Lancaster University, under the supervision of my professor. This is a joint work, in which we replicated a 20‑year‑old paper, published at IMC, on inter‑domain routing policies.

Now, among the other things that we're going to do in this presentation, we try to answer four specific questions: Why would a network operator want to model the inter‑domain routing system in the first place? Whether the cornerstone model actually captures today's inter‑domain routing policies. What is the phenomenon of selective announcements and why should we even care about it? And finally, what can be done to enhance our capabilities, either as researchers or as network operators.

Before jumping into the main point of this presentation, I'm going to explain a few basic things on how the inter‑domain routing system operates. So the Internet is a network of networks. These are called autonomous systems, and they form business relationships between them because they want to exchange traffic between their networks. Even though these business relationships can be very complex, they are categorised into two types: the provider‑to‑customer relationship and the peer‑to‑peer relationship.

In the provider to customer relationship, a customer AS pays a better connected provider to transit their traffic to the rest of the Internet. While in a peer to peer relationship, two autonomous systems exchange traffic for free.

Apart from the connectivity aspect that business relationships give us to understand the inter‑domain routing system, we also need to explain the traffic flow rules, the routing policies. Autonomous systems configure their routing policies primarily because they want to achieve their business goals. So for example, in this topology over here, Meta might want to propagate an announcement for a prefix to the provider Cogent, but not do the same thing to their provider Lumen, because for some reason they want to load‑balance their traffic or their costs.

Finally, autonomous systems are called autonomous because they independently define the routing policies without the need to globally coordinate with the remaining 75,000 or so autonomous systems on the Internet. And one last thing is that these routing policies are sensitive information so usually autonomous systems keep them secret.

This slide over here, if I had to summarise it in a sentence, I would say that connectivity on the inter‑domain routing system, or in other words business relationships, does not imply reachability. Because Meta is connected with Lumen, but for some reason they do not propagate all the prefix announcements.

Now, let's assume that I'm a network operator and I want to model the inter‑domain routing system. Why would I need to do that in the first place? There are many reasons. One would be to see the side effects of a traffic engineering strategy that I might follow. Or I'm doing some routing policy changes and I want to see how that affects my network. I want to see what would happen in terms of resilience if a large DDoS attack hit my network, or I would like to find the best geographic location to install the next data centre or the next point of presence.

So, there are many reasons why a network operator would like to model the inter‑domain routing system. And modelling the inter‑domain routing system is a subject that has been studied thoroughly over the last 25 years. It has been framed as a physics problem, as a statistics problem, as a graph theory, machine learning or optimisation problem. Nonetheless, the cornerstone model was proposed in 2001 by Gao and Rexford, in which they described a set of rules that need to be followed and met so that the inter‑domain routing system converges to a safe state. And that work was primarily based on business relationships among the ASes. Two years later, Wang and Gao observed the Internet routing policies at the time and checked whether the ASes of that time obeyed the Gao‑Rexford rules.

Now, the efforts of the research community over the last 20 years primarily focused on reproducing and extending the Gao‑Rexford model, and to the best of our knowledge, we are the first to fully reproduce this complementary piece of work to the cornerstone model, in an effort to try and understand and answer whether today's routing policies follow this model. This is a replication study, so we follow the exact same methodology which they followed in the IMC 2003 paper. I'm not going to say a lot here. I'll dive into details later in the presentation.

But what we tried to do is a two‑phase study: infer the import routing policies of autonomous systems on the one hand, and infer their export routing policies on the other.

Now, for the import policies, we need to define some terms. In the previous example, Meta is a multihomed customer of Lumen and Cogent, and Meta announces a prefix they own to both of their providers. Lumen and Cogent, since they have Meta as a customer, are going to receive a customer route for that customer Meta prefix. Cogent decides to propagate the announcement to their peer, so Lumen also receives a peer route for the same Meta prefix, and they now have two routes to the same prefix to decide between. And finally, Akamai receives a route from their provider Lumen, so they receive a provider route.

The most important thing that happens in the import policies phase is the assignment of a local preference value to those routes, and an autonomous system assigns a local preference value to indicate how favourable that route is. For example, Lumen needs to have a system to decide which route to follow to reach the destination. Gao‑Rexford, back in 2001, described an ordering between those local preference assignments, and they said that usually customer routes are going to be assigned the highest local preference values, because customer traffic generates revenue for your network. Provider traffic incurs a cost, so provider routes are going to be assigned the lowest local preference values, and since peer routes are neutral, they are going to exist somewhere in the middle of this hierarchy.
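The ordering described here can be sketched in a few lines of Python. This is an illustrative sketch only: the Gao‑Rexford model prescribes the ordering customer > peer > provider, not any specific numeric values, so the numbers and the tie‑break below are my own assumptions.

```python
# Illustrative local preference values; operators pick their own numbers,
# only the ordering customer > peer > provider is the Gao-Rexford assumption.
LOCAL_PREF = {"customer": 200, "peer": 100, "provider": 50}

def best_route(routes):
    """Pick the most favourable route.

    `routes` is a list of (neighbour_type, as_path) tuples; the highest
    local preference wins, and ties are broken by shortest AS path,
    roughly as in the standard BGP decision process.
    """
    return max(routes, key=lambda r: (LOCAL_PREF[r[0]], -len(r[1])))

# Lumen's three hypothetical routes to the same Meta prefix:
routes = [("provider", ["Lumen", "Meta"]),
          ("peer", ["Cogent", "Meta"]),
          ("customer", ["Meta"])]
print(best_route(routes))  # → ('customer', ['Meta'])
```

The customer route wins even though all three paths reach the same prefix, which is exactly the revenue‑driven ordering the model assumes.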

One of the aims, one of the goals of this work, is to answer whether this assumption holds with fresh data, 2023 data, because, as we said, this is 2003 work which we replicate.

To do so, we need to have the local preference assignments of an autonomous system on the one hand, and the type of business relationships with its neighbours on the other hand. So to get our hands on local preference assignments, we are going to use looking glass server data as well as IRR data, and for the business relationships, we're going to use a state‑of‑the‑art inferred AS relationship dataset.

Now let's jump directly to the results. We observed that the local preference allocation pattern is consistent with the Gao‑Rexford model in 83 percent of the cases on average. In the original Wang‑Gao study it was above 99%. More specifically, even though customer and provider routes are consistent with the model, peer routes are highly inconsistent with it. This is probably due to better performance of the peer routes, or, most probably, because of the flattening of the Internet hierarchy that happened during those 20 years, where the emergence of the IXPs and CDNs, let's say, overtook the power the ISPs had back in 2003.

Now, this result, these numbers over here, are constructed using looking glass servers, which are SSH or Telnet interfaces to routers, so you can see the actual control plane configuration. When you use local preference values from the IRR database, which is primarily maintained for documentation, you see a far more consistent view with the business relationships and the Gao‑Rexford model.

Again, if I had to say only a sentence for this slide: local preference allocations today are not as heavily dependent on business relationships as they used to be back in 2003, hence we need to reconsider that cornerstone Gao‑Rexford model.

Now, one would argue that I'm using inferred AS relationships, so this is a possible bottleneck in my analysis and I am introducing some error. We also studied the error introduced, by leveraging BGP communities, and we observed that the error is negligible. The only thing left that can explain this inconsistency with the Gao‑Rexford model is probably the inability of the model to capture the actual routing policies.

Now, jumping to the second part of this presentation, the inference of the export policies. Things become a little bit more interesting here. We are going to follow the guideline of the original work, and we're going to assume that a provider network will announce, no matter what, all of its prefixes to its neighbours, because that's what the customer pays the provider to do. However, a customer autonomous system may selectively announce its prefixes to its providers or peers, because they want to load‑balance the traffic or handle their costs in a better way. And since there is not a specific value, like local preference, which can help us observe the export policies of an autonomous system, we are going to follow this guideline of the original paper and observe the export routing policies of autonomous systems through the routing tables of their neighbours.

Now, I'll give you a simple example of what a selective announcement can look like. In this example, Meta is multihomed to both Lumen and Cogent, and Meta has two prefixes. Now, Meta announces prefix P2 to both neighbours, so both Lumen and Cogent reach this customer prefix through a customer route, which generates revenue for their company. However, Meta, for some reason, wants to only receive traffic for prefix P1 through the ingress link with Cogent. So they only propagate the announcement to Cogent, and Cogent propagates it to their peer Lumen. If everything was working according to the Gao‑Rexford model, Lumen would follow a customer route to reach a customer prefix. Instead, for prefix P1, Lumen is going to follow a more expensive route, which is a peer route, to access that Meta prefix. So, if we need to give a definition, a selectively announced prefix is a customer prefix received through a peer or a provider route. A selectively announced prefix can also be a peer prefix received through a provider route, but we'll only focus on customer prefixes in this presentation.
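The definition just given can be written as a tiny classifier. This is a sketch of the definition as stated in the talk, restricted, like the talk, to the customer‑prefix case; the function and argument names are my own.

```python
def is_selective(origin_relationship, received_via):
    """A selectively announced prefix, per the definition above:
    a customer prefix that the vantage point nevertheless receives
    through a peer or a provider route instead of the expected
    customer route."""
    return origin_relationship == "customer" and received_via in ("peer", "provider")

# Lumen receives Meta's P1 (a customer prefix) via its peer Cogent:
print(is_selective("customer", "peer"))      # → True
# P2 arrives directly over the customer link, as expected:
print(is_selective("customer", "customer"))  # → False
```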

Now, to study the prevalence and the persistence of selective announcements, we need to have the routing tables of the neighbours of an autonomous system, as well as the inferred AS relationships with its neighbours. Now, in this table over here, in the left column, you can find all of the vantage points which we used to measure the prevalence of selective announcements in 2023. You can see that the top five networks are high centrality networks, and we observe a large portion of selective announcements out of the total number of prefixes that they observe. This slide wants to inform us that the phenomenon of selective announcements is still prevalent, even 20 years after the first time it was introduced.

Due to time constraints, I won't say a lot about this CDF, but what I want to say here is that if you are a selective announcer and you announce selectively at least one IP prefix, then probably you advertise selectively 100 percent of your prefixes, because, as you can see here, approximately 75 percent of the selective announcers announce exactly 100 percent of their prefixes selectively.

Leaving prevalence and going to persistence, we study how persistent those selective announcements are, and you can tell whether a selective announcement is persistent if it does not switch from selective to non‑selective in a specific time window. We use different time windows, the time window of a day or a month, and you can see in the left figure that selective announcements are consistent during a month. You would see the same figure for the time window of a day. But on the right‑hand side, you see that selective announcements become unstable during a year, and that is a good indication that if you care about computing selective announcements, probably the best time window would be the time window of a month.
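The persistence check described above can be sketched as follows. This is an assumed reading of the definition (a prefix is persistent in a window if it never flips from selective back to non‑selective within it), not the paper's exact implementation.

```python
def is_persistent(observations):
    """`observations` is a chronological list of booleans within one
    time window, True meaning the prefix was seen as selectively
    announced at that snapshot. The announcement counts as persistent
    if it never switches from selective to non-selective."""
    flipped = any(was_selective and not now_selective
                  for was_selective, now_selective
                  in zip(observations, observations[1:]))
    return not flipped

print(is_persistent([True, True, True]))    # → True
print(is_persistent([True, False, True]))   # → False (flipped mid-window)
```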
We take it one step further, and we studied the persistence of selective announcements over the last 20 years for five networks that existed in all of those 20 years, and even though you cannot see a clear pattern, you can observe a pronounced median jump in 2007, and that median stayed consistently above 25% for the remaining years, with the highest value of selective announcements being observed by Lumen in 2001, with a ratio of approximately 81%. So, selective announcements are sensitive to topological and probably policy changes, so we need to regularly compute them.
Now, if I had to summarise this presentation in just one slide, I would like you to know that understanding, predicting and modelling the inter‑domain routing system is a crucial function, not only for researchers like me, but also for network operators like you. Also, reachability on the inter‑domain routing system is not only determined by the connectivity we have with other autonomous systems and the business relationships we have with those autonomous systems, but also by the routing policies that they announce and configure. Even though we have the tools to observe those actual control plane configurations, and those tools are looking glass servers, for some reason we don't use them.

We need to reconsider the Gao‑Rexford model and extend it with inferential techniques if we want to overcome this limitation. You'll find much more detail in the paper, which you can find on this slide. You can find the code of this paper in our GitHub repository, and feel free to ask me any questions, either now or through my e‑mail.
Finally, before thanking you, I would like you to know that I'm open to research and industrial positions. Thank you for your time and for watching my presentation.

(Applause)

MASSIMO CANDELA: Thank you very much for the presentation. And if there are questions, please go to the mic. I see that there is already a queue forming.

STEPHEN STROWES: I am channelling Randy Bush on the Meetecho Q&A.

"Do you have comments on the 2013 paper, 'A survey of inter‑domain routing policies' by Gill, Schapira and Goldberg, which measured that the model held for only about 65% of the relationships?"

SAVVAS KASTANAKI: If we're speaking about the same paper, that was a survey conducted on network operators. Can you repeat the title please.

STEPHEN STROWES: The title of the paper is "A Survey of inter‑domain routing policies."

SAVVAS KASTANAKI: Yes, that's a survey conducted on network operators. Yes, of course it's a very important piece of work, but we do not go and actually ask people about their routing policies. We observe them through looking glass servers. So even though actually surveying network operators is very important, we also have tools to automate this process, rather than asking people in meetings and conferences.

AUDIENCE SPEAKER: Antonio Prado, BGP guy. Did you consider using RFC 9234, about Only to Customer and BGP roles etc.? I would like firstly to thank you for your analysis, because this is a very hard field to move in.

SAVVAS KASTANAKI: Unfortunately, I do not recall that RFC but if you give me some more detail I might either see it because I have some update slides or I'll just tell you my personal opinion. I don't remember that RFC.

AUDIENCE SPEAKER: Hello. So, during these years, did people change how they define their policies, or has our view of the Internet also changed? So back then we had a view of the Internet, we built a model on that, but maybe today we are able to see other parts of the Internet that we didn't see.


SAVVAS KASTANAKI: Thank you for your question. So both parts are correct. There are so many confounding factors that can play a role in these changes, but if I had to summarise in just one thing what the main reason is, it is the transition from the hierarchical model, which we had back in 2003, in which we had Tier 1, 2, 3 ISPs, and so on and so forth. The power was in the large ISPs back then. But with the emergence of CDNs and IXPs, that hierarchy changed, and we call it the flattening of the Internet topology. A lot of ASes decided to follow more aggressive peering policies, either because peering links are less expensive or because they have better performance. But the main reason we see those changes, I believe, is the flattening Internet hierarchy.

MASSIMO CANDELA: Okay. So, thank you very much for your presentation.

(Applause). I think it's time to go to the next presenter, who is Simon Fernandez. Oh, it's remote. He is a postdoc researcher. His research is focused on the role of DNS in privacy and security on the Internet. And today he will talk about ‑‑ the presentation title is "Whois right? An analysis of Whois and RDAP consistency."

SIMON FERNANDEZ: Hello I'm... the work we did with IPv4... at the university of...... Whois consistency...
Registration information, what it is and how to get it. And then I described the type of analysis that we ran on this data. I... about registration information.

So, structure or as... in the domain names, we sometimes need to get information about a domain. ...... who got it, and... at the same time... report...... through two main... protocols. Let's talk about both protocols.

So, the Whois protocol is an old protocol. It's unstructured, unsigned and unencrypted data... domains... have a... server, and because it is old and...... Whois record. So, for the... domain, we can see the domain name, we can see the registrar, we can see the creation date etc. All the information is like boils down to... However, Whois is not always as... for example in English, because... the Whois entry for the domain, as we can see part of it... if you cannot...... as I said before, it's...... another change we have... updates the every computer centre, sometimes we see this kind of entry inside of Whois records, and then it's a pain trying to figure out what data we are looking at. Because of all of those problems... which lead to our problems, in 2015 a new protocol was designed, called the Registration Data Access Protocol... security... JSON format, it has relatively well defined data types. However, it is not used by all TLDs. All generic TLDs that have an agreement with ICANN need to provide a server. However, the majority of..., so country code... so it's kind of a problem whether a ccTLD has a server or not. This is part...... of google.com. This is like part of the... because it's an entry, but as you can see there is not kind of the same... as the Whois entry. We can see that the registration entry and the dates are more clearly specified, so it's easier to parse for computers, but harder to read...
Let's look at how we get......... entries. ...... entry, we'll start by using the TLD of the domain, so here .com... we find that... that... that HTTP server... registry .com... and we get a JSON... inside of this data entry we may have the... that... google.com can be found at...
We can then follow this to the... server... RDAP entry. ...... if we start the same way, we start by extracting the TLD of the domain... list provided by IANA, and we get the domain name of the server that manages the Whois server. ... through port 43 with the Whois protocol. However, within the domain, the ccTLDs do not... are not present in the... and many Whois servers are not present in the... community on this with additional Whois servers... and... use this to get the server... Whois data. So, in a way, we get to the... port 43, Whois protocol, and... through free‑form text with no fixed format. Then... parses... we look at things like some data and... entry... Spanish or English or anything else, you actually follow the server and the text record on the second server, on port 43 with the Whois protocol, and there is a new set of data with potentially... a new... I am seeing...... sorry for the fast pace, sorry for the unstable Internet.
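The Whois side of this lookup chain can be sketched as follows: a raw TCP query on port 43 and a crude scan of the free‑form reply for a registrar referral. This is a minimal illustration, not the authors' collection pipeline; the referral field names checked below vary between registries, which is exactly the parsing problem described in the talk.

```python
import socket

def whois_query(server, domain, timeout=10):
    """Minimal Whois lookup: plain text over TCP port 43, with no
    structure, authentication or encryption, as the protocol predates
    all of those."""
    with socket.create_connection((server, 43), timeout=timeout) as s:
        s.sendall(domain.encode() + b"\r\n")
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def follow_referral(text):
    """Look for a registrar Whois referral line in the free-form reply.
    Field names differ per registry; these two are common spellings,
    chosen here as an illustrative assumption."""
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() in ("whois server", "registrar whois server"):
            return value.strip() or None
    return None

# Example (hypothetical server name): follow the registry's referral
# with a second port-43 query to get the registrar's record.
# text = whois_query("whois.verisign-grs.com", "google.com")
# registrar_server = follow_referral(text)
```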

So, based on this setup, we see that we can have multiple different servers and multiple different records, but we have no guarantee that all of them actually provide the same data. We see that we have... two different... entries from... RDAP, two different texts from Whois, but... therefore. From the RDAP entry of the registry. We have the same... from the Whois entry of the registrar.

This is the original question that we tried to answer with... the data we collected.

So we started from a list of domains from several sources, and we extracted 55 million domains that had both Whois and RDAP entries available. We collected the records; this amounted to 164 million records. And we parsed their contents. Sometimes we built the parsers for Whois entries ourselves. And we checked if the values are actually consistent.

We parsed the four main fields, we parsed...... IANA ID and e‑mails, because those fields are actually... used in other such works and... records. I see a lot of feedback notes: crackling mic... should I carry on? Is it better this way? It will reduce clipping for you. Please tell me if it's still too loud and clipping.

Okay. I'll carry on with this one. Sorry for the clipping earlier. I hope it was still understandable.

We studied these fields... and they are used by security experts to get... for example. So, in this presentation I present specifically the name servers case. But here are the generic results that we have.

So, for each of those fields, for example name servers, for some domains we were not able to find those fields. We were not able to parse the name servers for all domains. Sometimes it's because the data is not present inside of the entries; sometimes it's because we were not able to parse it, because it was not in a format that we were able to detect. However, in the last column you can see the domain inconsistency, which represents the number of domains that have at least two records that do not agree on this value. In this presentation, I focus on the name server case.

Why name servers? They represent the authoritative name servers for these domains. Because it's the... There are different types of relations: sets of name servers can be included into one another, they may be intersecting, or completely disjoint. This is the partition that we observed. Note that it does not sum up to 100 percent, as we can have multiple entries per domain, so we can have multiple relations per domain. We observed that in 60% of the cases we are in the worst case, called the disjoint case, where two lists of name servers have no name servers in... meaning there is no single name server in common between the two entries. This is the most problematic setup, because in the inclusion or intersection setup, since we have at least one name server in common, the data present inside of those name servers may be the same. We cannot be sure the data is the same when it comes from different name servers. But in the disjoint case, we have no guarantee that the data between the different name servers is actually the same.
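The partition of name‑server set pairs described above can be sketched with plain set operations. The category names mirror the ones used in the talk; the function name is my own.

```python
def ns_relation(a, b):
    """Classify how two name-server lists relate, the way the study
    partitions them: equal, one included in the other, overlapping,
    or completely disjoint (the worst case)."""
    a, b = set(a), set(b)
    if a == b:
        return "equal"
    if a <= b or b <= a:
        return "inclusion"
    if a & b:
        return "intersection"
    return "disjoint"

print(ns_relation(["ns1.a", "ns2.a"], ["ns2.a", "ns1.a"]))  # → equal
print(ns_relation(["ns1.a"], ["ns9.b"]))                    # → disjoint
```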

So we focused on the disjoint case, which is the worst case from our point of view.

Remember the kind of setup we have. When we have two entries mismatching, we can have a mismatch between two entries from the same protocol, for example the registry‑level RDAP entry not agreeing with the registrar‑level RDAP entry, or we can have mismatches between two entries of different protocols, for example a Whois entry not agreeing with an RDAP entry. We observed that in 75 percent of the cases, the mismatch was between the Whois entry and the RDAP entry, and the remaining 25% of mismatches were between two entries of the same protocol, because of the referral system that allows different servers in the same protocol.

When we checked who is right, in the case of name servers we are lucky, we have a ground truth. For example, for dates we have no way of knowing which is the real date, but for name servers we have the DNS that provides us with an entry. That's what we did. We collected 300,000 NS records and checked: when there is a mismatch, who has the right value? We observed that in 78.5 percent of cases, RDAP had the right value, at least the value that is present in the DNS. In 21% of the cases, Whois had the right list of name servers, and in 0.5 percent of cases neither of them had the actual value within the DNS.
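The ground‑truth check just described can be sketched like this: compare each protocol's name‑server list against the set obtained from DNS NS records. The normalisation (lowercasing, stripping the trailing dot) and the function name are illustrative assumptions, not the paper's exact code.

```python
def who_is_right(dns_ns, rdap_ns, whois_ns):
    """Compare each protocol's name-server list against the DNS
    ground truth (e.g. NS records). Names are normalised to lowercase
    without the trailing root dot before comparison."""
    norm = lambda names: {n.lower().rstrip(".") for n in names}
    dns = norm(dns_ns)
    rdap_ok = norm(rdap_ns) == dns
    whois_ok = norm(whois_ns) == dns
    if rdap_ok and whois_ok:
        return "both"
    if rdap_ok:
        return "rdap"
    if whois_ok:
        return "whois"
    return "neither"

print(who_is_right(["ns1.example.com."],
                   ["NS1.example.com"],
                   ["ns9.other.net"]))  # → rdap
```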

So this points out that, on average, RDAP is a little bit more trustworthy for the name servers, but still 21% of the cases lean towards Whois. So, given a domain, we have no way of knowing who has the right value if we are not able to check with a third source.
Let's conclude this presentation. We need registration information for many topics, as researchers, as security experts, to classify domains, to detect abuses and report them, for example. And we can get this information from different sources. We can have different protocols, Whois and RDAP, and inside of one single protocol we have different servers, with their own referral system. And we collected 164 million records for 55 million domains, and we found that around 5 percent of the domains have an inconsistency somewhere: they have at least two entries that do not agree on at least one field.

And for name servers we have a clear third party ground truth source, which is the DNS, but for most other fields, for example the abuse contact, the creation date, the expiration date, we have no third party that could tell us which server, which protocol, has the right value. So this data should be used with care, because, given a domain, I have no way of knowing if the data collected from Whois is actually the right data.

So, that will be all for this presentation. All of the data sets that we used in this article are available online; all the code and the data is also published online. And now, if you have any questions, I'll be happy to answer. Thanks.

STEPHEN STROWES: Thank you, Simon. Questions at the microphone lines, if anybody has them; the Q&A on Meetecho is currently empty. Sorry, we tried to interrupt you, but your audio was clipping at the start, so it was a little tricky.

SIMON FERNANDEZ: I saw someone in the chat, sorry it took so long to check the mic.

STEPHEN STROWES: One question. You have identified a lot of mismatches. Did you approach operators to correct these mismatches?

SIMON FERNANDEZ: Sorry ‑‑ in some cases we approached them, special cases, for example when we detected IANA ID mismatches, which were easier to detect and to be a hundred percent sure whether it was a right or a wrong value. We contacted some of them, and the ones we contacted actually never answered. But a few months after we contacted them, they fixed the setup. In their specific case, we observed that the IANA ID was completely invalid, the wrong number had no real registration, and the other fields were also placeholders for names, for phones, for e‑mails. So everything changed after we contacted them, but we had no discussion with them to know if it was a configuration problem or a registry problem. So, yeah, it's hard to contact those operators and get them to fix things, and even more to get precise information on what happened.

STEPHEN STROWES: We have one other question in the queue.

MASSIMO CANDELA: The question is remote, and the question is from Moritz Mueller:
"Did you look at the name servers at the parent or at the child? These also mismatch occasionally."

SIMON FERNANDEZ: Yeah, that was a question we had when scanning, because, as you pointed out, there was a paper a few years ago about mismatches in entries between the parent and the child, and sometimes the NS records are not even present at the child. So we chose to collect the name servers at the parent level, because it was the most reliable and it made the most sense in our analysis. Sometimes they mismatch, but we considered that they were the easiest ones to collect and to analyse, compared to having child‑level records that are really inconsistent and hard to get.

STEPHEN STROWES: All right. Thanks Simon.

(Applause)
Our next speaker is Pavlos, who will be talking about analysis of usage patterns in RIPE Atlas measurements. He is a senior researcher at Datalab. His main research interests are Internet routing, measurements and data science. He is involved in the AI4NetMon project, which he is going to talk about now, and which received the support of the RIPE Community Projects Fund.

PAVLOS SERMPEZIS: I'm going to talk to you about the RIPE Atlas tool. And in fact this tool is part of a project we had, AI4NetMon, which is a project that started in 2022; now it's close to its end. And the main goal of this project was to identify and quantify biases in Internet measurement platforms and in Internet measurements. So we have produced a lot of material. You can find some RIPE Labs articles. You can check our paper. You can go to our website. But just to give you briefly an idea of what we mean by bias and how it's defined: it's when our measurements or vantage points are not representative of the Internet. So let's say, for example, that 30% of networks in the world are in Europe, 30% are in the US, 20% in Asia etc. If in our measurements we have some vantage points, let's say three from Europe, three from the US, two from Asia, we can say that in terms of location we are representative, so we have no bias. If we have ten vantage points from Europe, then we are biased. And here a clarification: being biased does not mean you are wrong, because you may have selected vantage points in Europe because you want to measure something from Europe. But bias gives a quantification of this pattern, of whether something is not representative.
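The location example above can be turned into a number. The sketch below uses a total‑variation‑style distance between the population share of each region and its share among the vantage points, where 0 means perfectly representative; AI4NetMon defines its own bias metrics, so this particular formula is only an illustrative choice.

```python
def bias(population, sample):
    """Half the sum of absolute differences between each region's
    share of the population and its share of the sample (total
    variation distance); 0 = perfectly representative, 1 = maximal bias."""
    keys = set(population) | set(sample)
    p_tot = sum(population.values())
    s_tot = sum(sample.values())
    return 0.5 * sum(abs(population.get(k, 0) / p_tot - sample.get(k, 0) / s_tot)
                     for k in keys)

# The talk's example: 30% Europe, 30% US, 20% Asia, 20% elsewhere.
population = {"Europe": 30, "US": 30, "Asia": 20, "Other": 20}
balanced = {"Europe": 3, "US": 3, "Asia": 2, "Other": 2}  # representative
skewed = {"Europe": 10}                                   # all probes in Europe

print(bias(population, balanced))  # → 0.0
print(bias(population, skewed))    # → 0.7
```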

So, we have developed several tools for quantifying bias, and the last tool, which is still in progress and we are developing, is the RIPE Atlas measurement patterns analysis tool, which I'm going to talk to you about today. And for brevity, I'm just referring to it as "the tool".

Its goal is to analyse patterns in a set of RIPE Atlas measurements. And when we say patterns, they can be generic patterns or bias patterns that point to some non‑representative activity.

Of course, there is a UI from RIPE Atlas where you can see a lot of information about measurements. One difference between our UI and the RIPE Atlas UI is that ours is not that beautiful. Okay. But the other difference is that the RIPE Atlas UI analyses the results of a measurement, for example the RTTs, etc., and it does so for one measurement. In our tool, we don't analyse the results. We analyse only the set of probes, and we can analyse many measurements together.

So, in a nutshell, what our tool does: it's a web app where you can go and put in a number of RIPE Atlas measurement IDs, like those on the slide. You put in these IDs and then our back‑end does some calls to the RIPE Atlas API, collects some data, performs some analysis and presents the results on the UI. The results are in two sections: general statistics and bias analysis statistics. Have a look, this is what it looks like. It's still a prototype, in progress and in development.

As for the results: across all the measurements you have put there, so in this example we have put in seven or eight measurement IDs, it shows which probes have been used in these measurements and how many times per probe. And the same thing for the ASNs that host these probes: which ASNs have been used and how many times. So you can see which ASNs have been used most in these measurements and do some analysis.
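The per-probe and per-ASN usage counts described here are easy to reproduce offline once you have each measurement's probe list; a minimal sketch, with made-up measurement, probe and AS numbers, could look like this:

```python
# Count how often each probe and each hosting ASN appears across a set
# of measurements. The IDs below are invented for illustration; in
# practice they would come from the RIPE Atlas API.
from collections import Counter

# measurement id -> list of (probe_id, host ASN) pairs
measurements = {
    1001: [(6001, 3333), (6002, 1136)],
    1002: [(6001, 3333), (6003, 20940)],
    1003: [(6001, 3333)],
}

probe_usage = Counter(p for pairs in measurements.values() for p, _ in pairs)
asn_usage = Counter(a for pairs in measurements.values() for _, a in pairs)

print(probe_usage.most_common(1))  # [(6001, 3)]
print(asn_usage.most_common(1))    # [(3333, 3)]
```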

Then the next section is about the bias analysis of the measurements, which gives some more specific patterns of what's going on in the measurements. In the plot on the right, you can see different bars. Each bar corresponds to a bias dimension; it can be location, network size, etc. The blue bars are the average value over the measurements, so they are the patterns of the measurements we analysed. The larger the bar, the more the bias, which means there is a specific pattern there. I remind you again, bias is not necessarily wrong. And for comparison we also have the orange bars, which show what a random selection of RIPE Atlas probes would look like. So you can see which patterns are more or less intense.

The other two plots are similar, they show the same thing but in a different format. You can select between all these plots in the tool.

And here we have a plot where the dots correspond to the different measurements you have put into the tool. The X axis is the number of probes they use, the Y axis is the average bias. In general, the more probes, the less the bias, but using this visualisation you can find some outliers or some strange activity that you would like to look at in more detail.
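One simple way to flag the outliers mentioned here is to mark measurements whose average bias sits well above the rest, for example more than 1.5 sample standard deviations over the mean. The tool may well use a different rule, and the data points below are invented:

```python
# Flag measurements whose average bias is unusually high compared to the
# rest of the set. Illustrative data only.
from statistics import mean, stdev

# measurement id -> (number of probes, average bias score)
points = {
    1001: (10, 0.45), 1002: (50, 0.30), 1003: (100, 0.22),
    1004: (200, 0.18), 1005: (50, 0.85),  # suspiciously biased
}

biases = [b for _, b in points.values()]
threshold = mean(biases) + 1.5 * stdev(biases)
outliers = [m for m, (_, b) in points.items() if b > threshold]
print(outliers)  # [1005]
```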

And the final plot is a heat map that shows why there is bias, so it gives more detail on the exact patterns of what happens.

So, on the X axis we have all the measurements we have put there. On the Y axis, we have the causes that have been identified as the top causes or the top patterns. With the red and pink colours, we have the causes where the probes in our measurements are over‑representing a characteristic. For example, for location in Europe, around the middle of the plot, it's all pink, which means that the measurements we analysed used a lot of probes from Europe. So there is a pattern there. And the blue is when your probes under‑represent a characteristic: the bottom row shows networks with a customer cone between 1 and 3, which means small networks, and we can see that these networks were under‑represented in these measurements. So these measurements were mostly from large networks.

So, summarising: this is a tool that is part of the AI4NetMon project. It is a work in progress. It can help you analyse RIPE Atlas measurements, and we believe it can be helpful both for network operators and for researchers, mainly for three reasons.

First, you can easily get a deeper view of your own measurements. For example, if you put in your own measurements, you get automated reporting on our web page, and you can get some bias insights, or some recommendations of what you may want to fix, if there is such a case.

Also, if you analyse other people's measurements, you can identify patterns in them. For example, you can learn from others about what they do and what probe sets they select for a specific purpose, and from that, what you should select next.

And, mostly for researchers, you can do analysis of usage patterns in RIPE Atlas at scale. So you can play with the RIPE Atlas data.

The last point I would like to highlight is that we would like your feedback and ideas for extra functionality in this tool. Here is a very, very short questionnaire we have made, with only three questions, all of them optional, so you can reply with whatever you want. We would like some feedback on the reporting: is the reporting we give actually useful for you, what information would you like to see there, what extra information, and how should we present the results?

Also, feedback on the user interface: whether you understand the plots and the visualisation, and whether you would like something else. And the last one is about best practices, because as I told you, we can use this tool to identify patterns and best practices if we give it a specific set of measurements. We would like some feedback on what set of measurements we should analyse. For example, if I want to learn how people measure CDNs, should I select all measurements from country X that target ASNs that are CDNs? Would that be a good use case? Any ideas are welcome. Please help us in our work, and with that I would like to thank you.

AUDIENCE SPEAKER: Hello, Robert from RIPE NCC. Somewhat involved with RIPE Atlas.
I like what I see. Thank you very much for this work, I consider this to be complementary to what we have kind of built into the system. But this is useful from a completely different perspective, so thank you for doing that.

My question is about sustainability. Which I didn't see on the slides. So is this a project, is this an experiment? Is this something that you want to stand behind for a long time? What's your take on that?

PAVLOS SERMPEZIS: This work started as part of the RIPE NCC Community Projects Fund, from which we got funding, and within this project we had the opportunity to build different tools. So we have an API, we have a database that is automatically updated, and this tool is a web app, so it's up there and we are going to maintain it for a few more years. I mean, it does not need much extra effort. We still have some resources to put extra effort into improving it, that's why we ask for your feedback, but we are open to collaboration if someone else wants to help us make it more sustainable.

AUDIENCE SPEAKER: I would be happy with that, even if we didn't help you. But thank you.

MASSIMO CANDELA: You are saying that essentially the tool that you created can be exported, like open sourced, or used by another organisation, in case you don't want to maintain it any more.

PAVLOS SERMPEZIS: It's open source, yeah, everything is open source.

MASSIMO CANDELA: Thank you. Any other question?

AUDIENCE SPEAKER: Hi. Eve from Measurement Lab. Thank you for the presentation. I would be interested in having the same study or analysis done on the Measurement Lab data or other open datasets. That would be very interesting to me. I also had a question: have you considered extending the project or tool to identify when these biases lead to false positives or false negatives?

PAVLOS SERMPEZIS: What do you mean by false positives and negatives in terms of bias?

AUDIENCE SPEAKER: I guess in terms of whatever the researcher is looking at. It would be interesting to know how often these biases impact the conclusions drawn from the research.

PAVLOS SERMPEZIS: Okay, so here again the clarification that bias is in fact a characteristic of your measurements. You may be biased, you may do a very biased measurement only from Europe, but you may do it for a specific purpose; it does not mean you are wrong, so it's not something bad. But in some cases you may be biased without knowing it, and you would like to fix it. In that case the tool can help you fix it.

MASSIMO CANDELA: Thank you very much. Any other questions? Okay. Thank you.

(Applause)

MASSIMO CANDELA: Then we go to the next presenter, who is Eric Lanfer.

ERIC LANFER: Thanks a lot for the nice introduction. It's also my first RIPE meeting and I really enjoy being here, it's a great community, and I am really happy that I can present our work today.

In recent months, we researched and measured a lot on Starlink, and we came up with two datasets: the WetLinks dataset and the Starlink on the Road dataset, which contains mobile data. I will start by introducing the WetLinks dataset, which we measured together with our colleagues from the University of Twente.

So the key features of the dataset are that we measured for six months in stationary setups, in Osnabrück and Enschede, two cities in Europe. We measured in autumn and winter. We measured some key network performance indicators, which I will introduce on the next slide. And one special thing we have here is that we measured accurate weather data directly on site, where the Starlink dishes are placed.

In total, we collected approximately 140K measurement points: 80,000 from Osnabrück and 60,000 from Enschede. We published that; I will give you the link to our datasets later.

So, here on the left and right you see our measurement setup. On the left is the generation 2 dish. On the right is the dish from Enschede, which is a generation 1 dish. You can see in the upper right the weather station, which was placed directly at the dish. And on the right you see the diagram of how our measurement process worked. At first we measured throughput, UDP based, using iperf3. We decided to use UDP to avoid measuring TCP effects like congestion control and so on. Next we measured RTT and packet loss using the ping tool, and after every six measurements, we also ran an MTR measurement to get the current route.
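The ping step of a pipeline like this has to turn raw tool output into RTT and loss data points. A small parser for the summary lines of Linux iputils ping might look like the sketch below; other ping variants format these lines differently, and the sample output is invented:

```python
# Parse the summary lines of Linux iputils ping into data points.
import re

def parse_ping(output: str) -> dict:
    """Extract packet loss (%) and average RTT (ms) from ping output."""
    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", output)
    rtt = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+)", output
    )
    return {"loss_pct": float(loss.group(1)), "rtt_avg_ms": float(rtt.group(2))}

sample = """\
10 packets transmitted, 10 received, 0% packet loss, time 9012ms
rtt min/avg/max/mdev = 31.207/44.880/58.231/8.120 ms"""

print(parse_ping(sample))  # {'loss_pct': 0.0, 'rtt_avg_ms': 44.88}
```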

And the weather data, as I said, was collected with a professional weather station, a Froggit DP2000, sampled at different intervals depending on the metric.

This is the architecture we used in total. You can see on the right the measurement client, which was a server running iperf. It has two network interfaces, one connected to the university network where our measurement server and the data collection server were running. We tried to split that up so that the measurements are not interfered with by our control channels. Then we have a Starlink router, which we had in bypass mode, so we had the address from the NAT directly assigned to the measurement server. Going to the user terminal, we scraped data from the terminal, since the terminal delivers some obstruction data, and also timing data, GPS location and so on. We also collected the dish version and curated it in the dataset. Next you have the connection to the satellite and to the ground station, where the traffic leaves the network. We have seen that the ground station used is Aerzen, which is somewhat close by in Germany, for both stations, and the PoP, according to the traceroutes, is in Frankfurt. Then we have the measurement and the data collection. Additionally to our own weather measurements, we collected weather data from the German weather service and the Dutch weather service, from the stations closest to our measurement points.

To give you a little overview of the throughput we measured: in Osnabrück we had these numbers. It seems like the generation 1 dish is a little bit faster than the generation 2 dish. In terms of upload, we had a median of about 15 megabits in Osnabrück and 16.2 in Enschede.

So, looking at a little analysis of the data: we took all samples and grouped them according to the hour of day, the diurnal cycle. And we see, similar to other backbone networks, that we have some peaks when people stop working and start streaming, here at roughly 6, 7 o'clock, and we have some better throughput between 4 and 5 a.m.

Next we looked into the impact of rain, since we had our accurate weather data. We grouped the rain levels into two buckets and analysed the samples, and saw that in some scenarios the download throughput can almost halve when it's raining outside. That fits our assumption that rain has an impact on Ka‑band radio transmission.

And when you have rain, you usually have clouds. We don't have a super expensive cloud radar at our university, so we tried to analyse it somehow with the data we had. Therefore, we looked at how long the rain period is on average. For our samples it was roughly 12 minutes, so we looked into the 12 minutes before and after the rain period and grouped that. Additionally, we built a group with no rain: since we had UV and solar radiation data, we set a threshold of 3 watts, and when the sun was shining we had the no‑rain group. We could see that we have a drop of roughly 10 megabits before and after the rain period, indicating that clouds might have an impact here. So that's it on our WetLinks dataset.
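The two-bucket rain analysis described above can be sketched as follows; the rain rates and throughput values are illustrative, not the paper's data:

```python
# Bucket throughput samples by rain rate and compare median download
# throughput per bucket, mirroring the two-bucket grouping from the talk.
from statistics import median

samples = [  # (rain_mm_per_h, download_mbps) -- invented values
    (0.0, 210), (0.0, 195), (0.0, 230), (0.2, 160),
    (1.5, 120), (3.0, 95), (2.2, 110), (4.1, 88),
]

def bucket(rain: float) -> str:
    return "no rain" if rain == 0.0 else "rain"

groups: dict[str, list[float]] = {}
for rain, mbps in samples:
    groups.setdefault(bucket(rain), []).append(mbps)

for name, values in groups.items():
    print(name, median(values))
# no rain 210
# rain 110
```

With real data, finer rain-rate buckets or a before/during/after split (as in the cloud analysis) follow the same pattern.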

More details of course in the paper.

Next was our Starlink on the Road measurement, where we teamed up with our local energy grid provider. They provided a van that was used by a technician doing service work in the network. Here he measured in winter, from January to March, in Germany again. We measured the same network performance metrics; additionally, here we measured power consumption. This data is also available as open data, without the GPS traces for privacy reasons.

The measurement process is the same. The only things that changed are that we have power consumption recorded by a smart plug, and we were not able to put a weather station on the van; the data wouldn't be that good when the car is driving. So we just used data from the German weather service via their open API.

This is our architecture on the car. The car has two batteries. We connected a 2 kilowatt power station to the second battery via a timed relay, so that when the car switches off we could charge our battery a little bit more. We had a Zigbee socket that was used to switch the terminal on and off. And we had a Raspberry Pi on board, which observed the ignition state, controlled everything, and also conducted and collected the measurements.

Since we had no cable attached to the car, we didn't have a second link, so everything was running via the Starlink link. Apart from that, the setup was almost the same.

And this is the data we collected. As you see, the technician mostly operates in the city of Osnabrück, which is where most of our samples are, with some more samples in the countryside. I put that on a map where you can see the throughput, and as you can see, we have some points where the throughput drops to roughly 100. We took a look into why we had this, and found that at those points we had more obstructions than in the other hexagons, like a forest or a bridge, or the local mountain, which is around 80 metres high; down there is also a forest. To compare that, here is the highway, the Autobahn, where you usually don't have any obstruction and where we were almost able to achieve 300 megabits. As you see, it's higher than we had in the WetLinks dataset, because the Flat High Performance dish really is high performance.

We looked into the impact of speed. Therefore we did a similar grouping as with our rain values: we grouped the samples into speed buckets. Since the operation was mostly in the inner city, where you have a speed limit of 50 km/h, we don't have that many samples at higher speeds. However, we see a significant difference between a standing and a moving vehicle. We assume that the decrease in throughput rate is because of obstructions, as we don't see any further decrease with increasing speed. So it's more likely that when you are driving through the city you have more obstructions and so on, and this is what limits your throughput rate.

And as you see, I put the WetLinks plot next to it for comparison; the high performance dish achieves much more throughput in non‑moving scenarios.

Concluding this: we put up two curated datasets containing stationary and mobile Starlink data. We observed a time‑of‑day effect where the throughput decreases at certain times. We were able to show that rain has an impact on the download throughput; upload throughput is not affected that much, it's way more stable. We got some hints that there might be some cloud interference, and we will dig deeper into that in future work. And we saw that stationary performance is better than mobile performance: stationary, you have 10% more download throughput. The dish version also has an impact: between generation 2 and 1, Gen 2 shows better latency and Gen 1 shows better throughput rates. Lastly, we had some power consumption issues in our mobile setup, since the high performance dish uses on average 113 watts, peaking up to 190 when the heating comes into play, as we measured in the winter, and our car was only delivering 90 watts. So we have to improve that setup a little bit, maybe.

That's it. Thanks very much for your attention. I would be happy to answer your questions. My website is there; you can find the preprints of the papers on it. We will present the papers on Friday at the TMA conference, so you got a first look now. And yeah, feel free to ask questions. Thank you.

(Applause)

STEPHEN STROWES: You heard that right: Erik is scooping his own work by presenting it here before it's presented at TMA. I want to follow on from the final point on your conclusion slide: can you do anything about the power consumption of these dishes?

ERIC LANFER: Yeah. So we converted the power from DC to AC and back to DC again. We think we lose some efficiency there, so maybe we can directly attach the Starlink dish to the car's power network, let's say. We are now working on that to save a little bit more energy. And we are thinking, since winter is over now, that we can put a solar panel on the roof of the car, so we can do some energy harvesting while the car is standing and measurements are running.

STEPHEN STROWES: Cool. Daniel.

AUDIENCE SPEAKER: Daniel Karrenberg, fellow Starlink user. I have a 12 volt direct power supply for the Starlink dish, which also doesn't use the router. So that's much, much better. See me afterwards.

ERIC LANFER: Thank you.

AUDIENCE SPEAKER: Ben, BGP tools. Just follow on from the DC comment. There are some nice New Zealanders who have designed DC only boards to power the ‑‑

ERIC LANFER: I think I saw their post.

AUDIENCE SPEAKER: It requires a special kind of PoE, because it's a lot of power over PoE.

ERIC LANFER: Thank you for that remark.

STEPHEN STROWES: Okay. And the queues are empty. So let's all thank Erik again. Thank you very much.

(Applause)
STEPHEN STROWES: The final presentation in this block is Robert Kistelecki from the RIPE NCC, who will give us another RIPE NCC tools update.

ROBERT KISTELECKI: Hello everyone, I am Robert from the RIPE NCC. I work as a principal engineer, mostly with measurements and tools. So, I'm going to give you the customary RIPE NCC tools update this time.

First of all, RIPE Atlas. The top section is what I said, word for word, the last time. So it's kind of a back reference, if you will, and I'd like to report on where we are with those things.

First of all, reviewing the big data back‑end. As you probably heard, we have a large cluster of machines serving back the data that we collect in RIPE Atlas, and that was one of the things we needed to work on, because it's ageing. The current status is that we have moved all of the historical data, up until two weeks ago or so, out of this system, and it is now stored on Amazon S3. If you are asking for historical data from before 2022, I think, right now you are already served from there, and you probably don't even notice. So that's a good thing.

As I said, all the data is extracted already; it's just a configuration switch, and maybe next week we are going to serve everything historical from there. What is happening right now concerns what we call the hot data, so basically the last two weeks, two to four weeks, which is still in our HBase cluster. We are going to move that onto a different cluster, which is a cloud‑based one on rented machines.

The point of this exercise is mostly to reduce the cost associated with serving the data back to you, but we also gain some efficiencies along the way.

We expect to finish this work by the end of the summer.

Next one: infrastructure. This is basically everything behind the probes other than the big data back‑end. Again, the top part is what I said last time. The current status is that we switched over from an on‑premise Elasticsearch cluster to a cloud‑based OpenSearch one for what we call the measurement metadata. If you are asking what kind of measurements there are in the system, what was running against this target at this point in time, and so on, those questions are answered from this one.

And this is also for the cost reduction part.

We have containerised some parts of the system, and as of today, I think, we have the first controller in the cloud serving about 100 probes, so we are looking at how it behaves, whether it's different, better or worse than what we had before. The ultimate aim here as well is a more flexible infrastructure, reducing the cost of this operation.

And this is expected to finish around end of Q3.

The user interface is going through a couple of changes, as you have probably observed. Last week, Monday I think, we released another big batch that included revamped measurement pages. Now you have a "my dashboard" which is customisable; you can drag things left and right in your order of preference. We have a completely new set of promotional pages, which have been reworked to give an introduction to people who are not familiar with Atlas, to give them a starting point.

Tomorrow and on Thursday, between the first break and the lunch break, I think, and during the session, Stefan and probably me too are going to be at the coffee stand. So if you'd like an introduction to the UI, or you have general questions and observations, please come and talk to us. Mikhail is going to be there too, so some of us will be able to answer these questions.

All right, just highlighting the new measurement UI. On the left is the old one, on the right is the new one. My personal favourite, honestly, is the one in the front. For a long time we had a feature where you could add and remove probes from existing measurements, and sometimes people actually used this to refresh or update the probes. Now we have made it possible to just click on the user interface and say: all the dead probes, make them go away and give me fresh ones, for example. So you are basically two, maybe three, clicks away from revamping and refreshing your measurements if you want to. But there are other features as well in the new UI. As I said, please come and talk to us, or just explore on your own.
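Under the hood, the "remove dead probes, add fresh ones" operation maps onto the measurement participation-requests API. The payload shape below is my recollection of that API and should be checked against the RIPE Atlas API documentation before use; nothing is sent over the network in this sketch:

```python
# Build a participation-request payload that removes a list of (dead)
# probes from a running measurement and asks for fresh ones worldwide.
# The payload field names are an assumption based on the RIPE Atlas v2
# API; verify against the official docs. No request is sent here.
import json

# Endpoint the payload would be POSTed to (msm_id = measurement ID):
API = "https://atlas.ripe.net/api/v2/measurements/{msm_id}/participation-requests/"

def refresh_payload(dead_probe_ids: list[int], replacements: int) -> list[dict]:
    return [
        {"action": "remove", "type": "probes",
         "value": ",".join(map(str, dead_probe_ids)),
         "requested": len(dead_probe_ids)},
        {"action": "add", "type": "area", "value": "WW",
         "requested": replacements},
    ]

payload = refresh_payload([6001, 6002], replacements=2)
print(json.dumps(payload, indent=2))
```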

We have also been working on repackaging the firmware. The current status is that we are almost there with a RedHat based package; the next ones are going to be Debian and OpenWRT. The point of this exercise is that we want to make development easier: for example, if there is a code change in the probe firmware, that change is made in the repository, and at the other end of the pipeline we want RPMs, OpenWRT images and everything else the users can use to pop out. That's why we are doing this.

I put out some proposals earlier this year and last year, so I want to reflect on where we are with them. We highlighted this internally before, and then we had the public discussion about it: there are some users who like to run a lot of software probes, and the suspicion is that they do this because they want to collect credits. This is not that useful for the system and it's not that useful for the users either. Therefore, there was a discussion about what to do about it, and the end of that discussion is basically that we are going to apply the restrictions we proposed. So there is going to be some kind of curtailing of how many software probes you can run purely for collecting credits, so to speak. If you want to run more than the system allows, you can always reach out to us and explain why you want to do that, and if there is a good explanation, then of course we'll tweak the system.

Measurement aggregators. This was another topic. There are some users who serve their clients by channelling requests to Atlas, but in this case we don't necessarily know how many clients there are, so the system is unaware of the real usage, so to speak. The proposal was to recognise that this is fine, this is happening, but then please, if you are an aggregator, give us an indication of who the clients are. We don't want e‑mail addresses or anything like that, but we would like to know whether client A is different from client B. That's all we want to know. There are some other properties here too: we would like these users to step up as sponsors, because they are, generally speaking, using more resources from the system. And of course, if business requirements dictate so, then we might have to enforce other rules as well.
Data retention. This was a very high level proposal, basically along the lines of: we are going to reduce costs, and we will be very careful about it, but in return the service level for older data is probably going to be lower than the service level for newer data. As I said, the technical solution is such that the difference is very, very small. So this might just end up being a cost saving exercise and not a reduction in functionality.

Maybe it's worth noting that Felipe put out an article about what this really means, with some technical details as well. So please go and read that article; it's highly entertaining too.

And these are earlier proposals where we realised that we put them out and actually started working along the lines of what the discussions dictated, but we never properly came back to the community to say: okay, here is what we're going to do. So I'm trying to backfill that, so to speak.

There is a very detailed technical problem of how we collect so‑called late packets in traceroutes. Please go ahead and read the proposal if you want to know the details; basically, we are going to simplify what's happening there. We also proposed that the system start measuring high‑profile CDNs. This had both pros and cons in the discussions, but there was no clear consensus that this is actually value that we want to extract. So we're not going to implement it as we proposed it back in the day, but we are trying a small exercise to see if it is actually going to give the value we thought it would, and based on that, maybe do a follow‑up.

Generic HTTP measurements. This has been discussed before. We tried again and the discussions were inconclusive: some people would want this, some others hate it. So for the moment we're not going to do it, unless the community wishes otherwise.

Adding support for STARTTLS measurements. There was a clear message: please go ahead, but please be careful about how far you go with it. For example, on SMTP it's nice to get the certificate from a server, but please do not send e‑mail. Which we will not.

Then finally, removing support for non‑public measurements. There was a discussion about whether this is useful as a feature in the system. That is, anyone can have a measurement that is marked as non‑public, so the results are not available to others. That's lower value for the community, but the owners get some other value out of it. The conclusion of the discussion is basically that we are going to leave it as it is. The only change is connected back to the data retention proposal, where we said: fine, but in this case we are not going to retain that private data for long, because it doesn't provide value to anyone else and is only costing the community. So that's going to be kept for a shorter time.

Okay. Moving on.
RIS:
The RIS team is mostly busy with these things. As you can imagine, sometimes the hardware behind RIS has to be refreshed, and sometimes the software needs to be refreshed. The current exercise is about renewing the operating system behind the route collectors.

We are also looking at how the various RIS peers behave. As you can imagine, they are not all behaving the same way: some of them are more noisy, some less. But they all affect the processes that create the dumps that you probably all consume. We want a better understanding of which ones are contributing to the noise, and to the amount of work we have to do, in order to put out the data at the end of the day for you.

We are also looking at how to improve the whole machinery, in particular how to deal with stuck routes. Some of you have seen that every now and then RIS says a route still exists, and keeps saying it exists, although in real life it doesn't; that's probably because one of the peers is stuck, as we call it. There might be bugs lurking, so the message to you is: if you see such a thing and can give us a signal, we can look into it and say whether it was real or something we need to fix, so that at the end of the day the quality of the outgoing data is better.

And then finally, the team running RIS and developing RIS is, for now, the same team focusing on the RIPE Atlas back‑ends, so on the big data. Their next task is basically to look at the datasets behind RIPEstat and RIS, and to optimise those, similarly to what they are doing with Atlas, but not exactly the same, because the systems are slightly different.

And then finally, RIPEstat. It's mostly business as usual. There is one feature that is useful to highlight: the team has been adding new data calls that basically give you metadata about how old or new the data behind RIPEstat is for the various data calls; it's been enabled for some of them. We are integrating this into the service, so that when you ask for data, or ask the UI to explain something to you, it's properly annotated with what's current and what's old, and so on and so forth.

And in the bigger scheme of things, we are looking at how to do the product strategy in the longer run: whether changes are needed, and if so, what we are going to do about that. That is planned basically for the summer, after the RIPE meeting.

And that's all I have to say. If there are questions, I'm happy to answer them.

MASSIMO CANDELA: Thank you very much, Robert.

(Applause)

We have a question from the Internet.

STEPHEN STROWES: Randy Bush, IIJ, asks: "As both a researcher and an operator I would be quite interested in the measurements and modelling you used to assess cost reductions in moving from self hosted to AWS."

ROBERT KISTELECKI: That's fair. I can totally imagine that, you know, at the next RIPE meeting we'll come back to you and kind of explain the experience, so to speak, what worked and what didn't and what path we took to actually make this real. I cannot promise that but it could be a good topic.

AUDIENCE SPEAKER: Pavel: I have a question, also about cost savings. Could you add some numbers, either absolute or relative: how much it costs now, and what we are looking at?

ROBERT KISTELECKI: I would recommend you read the article I highlighted in the presentation, because Felipe actually went into the details and also mentioned particular numbers there.

HANS PETTER HOLEN: If you want to know more about the cost savings and the infrastructure, please come to the Services Working Group tomorrow where Felipe will present on that. He has a lot of interesting numbers and pictures on what we are doing on the back‑end side on the infrastructure.

STEPHEN STROWES: Moritz Müller: "He has recently used the RIPE Atlas measurements in the Google BigQuery interface for the first time. He says they were unsurprisingly expensive but also quite useful. Any idea how often this data is being used?"

ROBERT KISTELECKI: Oh. Yes. So we have some users of that. Admittedly, we were trying to be careful about really going out and saying: oh, this is available, use it. So it's basically in a beta stage. We are trying to look at the value it provides versus the cost that we observe, to make a determination of what the future for that can be.

Thank you very much, Robert. I think we have another question.

AUDIENCE SPEAKER: Daniel Karrenberg. Not so much a question: I have been involved with Atlas and the other measurement projects somehow. I think it's good for this group to take note of the bigger discussion that's going on in the RIPE community, and especially also in the RIPE NCC membership, about these measurement activities that we're doing. There are more and more people who question whether the RIPE NCC should be doing this stuff at all. So, if you want to keep your toys, excuse me, if you want to keep those useful things running, then you should take note of this discussion and do your best to influence it in your sense; and it might be not you yourself but maybe somebody else in your organisation, whatever. So, don't be blind. There is a discussion about this stuff going on outside this room.

Number 2: It would be extremely helpful if we found more people who are actually using things like Atlas to produce commercial products, or commercial-ish products, to actually sponsor us, because this other discussion I just mentioned becomes untenable if the argument can be made that there are people making money off this stuff that the RIPE NCC membership actually funds.

So, I understand of course that academics ‑‑ I'm not asking the academics, I am asking the people who are doing commercial or commercial-ish stuff. And it's also for the academics, of course, who know about people doing things like that, to actually talk to those people.

It's been a good ride for the past 20 years I think, but I think there is change coming. So don't be fooled. Face reality. Thank you.

MASSIMO CANDELA: So we have Randy who has a question.

RANDY BUSH: Hi. Just to be clear on my previous question, Robert. I am not interested in micromanaging RIPE or Atlas. If I did I'd move to Amsterdam. My question really is, I mean many of us face the issues of should we be hosting our own stuff or moving to the Cloud? And my intuition is moving to the Cloud may cost you the same except I'm going to pay 1,000 Amazon salaries. So, I suspect you actually did a reasonable analysis. So I'm really interested in the analysis and modelling and not questioning your decisions or discussing the RIPE budget.

ROBERT KISTELECKI: Thank you for that, Randy. We indeed looked at the numbers, and at what we can do with what we have on premise, and given the current direction of trying to save costs wherever possible, we looked at the alternatives, and the decision we made is basically that we are going to go ahead this way, because we believe that it will reduce the costs without doing much harm to the service that we provide.

RANDY BUSH: As I said I assumed that. I assumed. But it's the model and how you modelled it and measured it, which is interesting to me. And it's intellectually interesting. As I said I'm not interested in micromanaging your budget.

MASSIMO CANDELA: Thank you Robert. I think there will be more questions but we are already over time. So, we are going to go straight to the closing remarks.

Closing remarks:
So the first thing I would like to say is that, as usual, we have a mailing list. I have sometimes tried to remind you, researchers in particular, that we have a mailing list; apparently they are not too, let's say, fashionable nowadays, but the MAT Working Group has one, which you can find here. That is where the next call for presentations will be published. And if you have any feedback for us about the content, you can send us an e-mail to the MAT Working Group Chairs' mailing list, which is instead the one that you see listed at the top.

Then there is a quick announcement that the RIPE NCC asked me to make. In about 20 minutes there is the academic and NREN reception, which is essentially in the room in that direction, but due to the walls you have to go around through the corridor. So please join that reception.

And we are basically at the end. I would like to thank my co-chairs: Nina, who could not attend, and Stephen, who is there and helped me a lot.

And thank you of course for being here, and also thank you to the stenographer; sorry if I made you suffer today. See you in Prague.



LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.