Time-Weighted Correlation Approach to Identify High Delay Links in Internet Service Providers
Companies and Internet Service Providers (ISPs) apply monitoring tools to their network infrastructure, performing regular performance evaluations whose primary goal is to deliver crucial information about the current state of the infrastructure and, consequently, of the services running on it. However, these monitoring tools require ongoing development to handle more complex tasks, such as detecting performance issues. Within this context, this article proposes a mechanism for identifying high-delay communication links in the network that may cause these performance issues, using a temporally formulated Impact Score. This Score is based on data correlation techniques applied to the information collected by monitoring tools. Experiments conducted with real data from the RNP Network indicate the effectiveness of the proposal in identifying links that impact data communication and result in high end-to-end delays.
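The abstract does not give the paper's actual Impact Score formula, but the general idea it describes — correlating each link's delay series with the end-to-end delay, weighting recent samples more heavily — can be sketched as follows. The exponential weighting scheme, function names, and toy data below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def time_weighted_impact(link_delays, e2e_delay, half_life=10):
    """Rank links by time-weighted correlation with end-to-end delay.

    link_delays: dict of link name -> delay samples (most recent last)
    e2e_delay:   end-to-end delay samples over the same window
    half_life:   number of samples after which a sample's weight halves
    (Hypothetical formulation; the paper's Impact Score may differ.)
    """
    y = np.asarray(e2e_delay, dtype=float)
    n = len(y)
    # Exponential decay: the newest sample gets weight 1, older ones less.
    w = 0.5 ** (np.arange(n)[::-1] / half_life)
    w /= w.sum()

    def wcorr(a, b):
        # Weighted Pearson correlation of two delay series.
        am, bm = np.sum(w * a), np.sum(w * b)
        cov = np.sum(w * (a - am) * (b - bm))
        return cov / np.sqrt(np.sum(w * (a - am) ** 2) * np.sum(w * (b - bm) ** 2))

    scores = {link: wcorr(np.asarray(d, dtype=float), y)
              for link, d in link_delays.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: link "A-B" spikes together with the end-to-end delay,
# so it should rank first as the likely high-delay culprit.
links = {"A-B": [5, 5, 6, 30, 35, 40], "B-C": [7, 7, 7, 7, 8, 7]}
e2e = [20, 21, 22, 55, 60, 66]
ranking = time_weighted_impact(links, e2e)
```

On the toy data, the link whose delay spikes track the end-to-end spikes dominates the ranking, which is the behavior the abstract attributes to the Impact Score.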
- Research Article
2
- 10.1109/tnsm.2022.3179310
- Sep 1, 2022
- IEEE Transactions on Network and Service Management
It has been a long-standing demand of Internet Service Providers (ISPs) that Content Providers (CPs) share their profits to fund investments in network infrastructure. In this paper, we study profit-sharing contracts between a CP and multiple ISPs. Each ISP commits to improving the Quality of Service (QoS) for end-users through higher investment effort. The CP agrees to share the profits arising from the resulting higher demand for its content. We first model the non-cooperative interaction between the CP and the ISPs as a two-stage Stackelberg game. The CP is the leader and decides what fraction of its profits will be shared with the ISPs. Each ISP then simultaneously decides the amount of effort (investment) to enhance network quality. Here, the CP cannot observe the individual effort of each ISP, which poses a challenge for the CP in deciding how to share the profits among them. Therefore, we also investigate a cooperative scenario, where the CP only decides the total share it gives to the ISPs, and the ISPs then cooperatively divide this share among themselves. We study the effect of such cooperation between the ISPs by building a Nash Bargaining based model. We show that the collaboration improves the total effort of the ISPs and the payoff of the CP.
- Conference Article
19
- 10.1109/infocom.2017.8057123
- May 1, 2017
Growing mobile data usage has led to end users paying substantial data costs, while Internet service providers (ISPs) struggle to upgrade their networks to keep up with demand and maintain high quality-of-service (QoS). This problem is particularly severe for smaller ISPs with less capital. Instead of simply upgrading their network infrastructure, ISPs can pool their networks to provide a good QoS and attract more users. Such a vISP (virtual ISP), for example, Google's Project Fi, allows users to access any of its partner ISPs' networks. We provide the first systematic analysis of a vISP's economic impact, showing that the vISP provides a viable solution for smaller ISPs attempting to attract more users, but may not maintain a positive profit if users' data demands evolve. To do so, we consider users' decisions of whether to defect from their current ISP to the vISP, as well as ISPs' decisions on whether to partner with the vISP. We derive the vISP's dependence on user behavior and partner ISPs: users with very light or very heavy usage are the most likely to defect, while ISPs with heavy-usage customers can benefit from declining to partner with the vISP. Our analytical results are verified with extensive numerical simulations.
- Research Article
1
- 10.12785/amis/080530
- Sep 1, 2014
- Applied Mathematics & Information Sciences
As malicious traffic from botnets now threatens the network infrastructure of Internet Service Providers (ISPs), the importance of controlling botnets is greater than ever before. However, it is not easy to handle rapidly evolving botnets efficiently because of the highly evolved detection-avoidance techniques used by botnet makers. Further, nowadays, Distributed Denial of Service (DDoS) attacks can compromise not only specific target sites but also the entire network infrastructure, as high-bandwidth Internet services are now being provided. Thus, ISPs are deploying their own defense systems to prevent DDoS attacks and protect their network infrastructure. However, the new problem ISPs confront is that botnet masters also try to destroy these defense systems to make their attacks successful. ISPs can mitigate DDoS through botnet-specific management by taking preemptive measures, such as the proactive reverse engineering of suspicious code and the use of honeypots. This paper illustrates an advanced DDoS defense technique for use by ISPs, with a real case study of the technique's implementation. The technique proved to be a very effective method for controlling botnets, and we were able to confirm this effectiveness in a real ISP environment.
- Research Article
4
- 10.1109/tnet.2020.2977198
- Apr 1, 2020
- IEEE/ACM Transactions on Networking
Growing mobile data usage has led to end users paying substantial data costs, while Internet service providers (ISPs) struggle to upgrade their networks to keep up with demand and maintain high quality-of-service (QoS). This problem is particularly severe for smaller ISPs with less capital. Instead of simply upgrading their network infrastructure, ISPs can pool their networks to provide a good QoS and attract more users. Such a vISP (virtual ISP), for example, Google's Project Fi, allows users to access any of its partner ISPs' networks. We provide the first systematic analysis of a vISP's economic impact, showing that the vISP provides a viable solution for smaller ISPs attempting to attract more users, but may not maintain a positive profit if users' data demands evolve. To do so, we consider users' decisions of whether to defect from their current ISP to the vISP, as well as existing ISPs' decisions on whether to partner with the vISP. We derive the vISP's dependence on user behavior and partner ISPs: users with very light or very heavy usage are the most likely to defect, while ISPs with heavy-usage customers can benefit from declining to partner with the vISP. Our analytical results are verified with extensive numerical simulations.
- Research Article
11
- 10.1145/3040230.3040242
- Jan 12, 2017
- ACM SIGMETRICS Performance Evaluation Review
The recent appearance of Over-the-Top (OTT) providers, who offer services (e.g., voice and messaging) similar to those of existing Internet Service Providers (ISPs), has been the main driver of a long-standing debate over network neutrality, i.e., the prioritization of different types of data in the network. In particular, ISPs oppose network neutrality, claiming that OTT companies: (i) have conflicting interests and provide competitive services, thus constituting a threat to the ISPs' growth, and (ii) distort incentives for investment, as they essentially exploit the network already deployed by ISPs, acting as free riders. The importance of the net neutrality debate has motivated the research community to study the interaction among the different tenants from a theoretical point of view [1,2]. Despite the interesting theoretical conclusions of the existing works, an empirical econometric study of the interaction between the new stakeholders was not possible hitherto, as the main growth of the OTT companies took place at the end of the last decade and, therefore, real economic data on the actual progress of these firms were not available until recently. In this article, we provide a detailed econometric study to analyze the relationship between the OTT companies and the ISPs. The empirical analysis has been conducted for seven countries over the period 2008-2013, considering ten major ISPs and three OTT companies that offer communication services (i.e., Skype, Facebook and WhatsApp), while we focus on five different parameters: (i) the revenues of the ISPs, (ii) the revenues of the OTT providers, (iii) the Capital Expenditure (CAPEX) of the ISPs, (iv) Internet penetration, and (v) the real Gross Domestic Product (GDP) that reflects the economic performance of each country.
For the analysis of our cross-sectional time-series (country and year) panel data, we propose two econometric models (based on the fixed effects model) with two different dependent variables: (i) Model A with the ISP revenues as the dependent variable and (ii) Model B with the OTT revenues as the dependent variable. The interpretation of the results of Model A reveals two very intriguing insights. First, we see that the revenues of the ISPs and the OTT companies are positively correlated, with a coefficient of 9.81, i.e., an increase of one unit (e.g., USD) in the revenue of the OTT providers causes an average increase of approximately ten units in the revenues of the ISPs. Second, the CAPEX of the ISPs also has a positive effect on their revenues, with a coefficient of 3.21. The positive correlation between the revenues of the OTT companies and the ISPs is also verified in Model B, with a coefficient of 0.03, which implies that the growth of the ISPs has a positive (although small) impact on the growth of the OTT providers. However, the most important conclusion that can be drawn from Model B is the negative impact that CAPEX has on the OTT profits. More specifically, the revenue of the OTT companies is reduced by 0.13 units for every unit that the ISPs invest in the network infrastructure. The observations of our empirical analysis are very important, as they provide tangible arguments and answers to the claims of the net neutrality opponents. In particular, our study has shown that the economic prosperity of the OTT firms has a positive influence on the financial performance of the ISPs. Consequently, it can be concluded that these two important stakeholders fruitfully coexist in the telecommunications and Internet market and should probably work more closely together to achieve a mutually profitable cooperation.
In addition, our empirical results have demonstrated that network investments have a positive effect on the ISPs' revenues and a negative impact on the revenues of the OTT providers, thus refuting the accusation that OTT companies free-ride. Finally, although not exhaustive, our study stresses the need for additional similar studies that will further clarify the interaction among the different entities in the evolving Internet ecosystem.
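The fixed-effects estimation this abstract relies on can be sketched with the standard "within" transformation: demean each variable within a country, then run OLS on the demeaned data, which removes unobserved country-level effects. The synthetic data below is illustrative only (the coefficients are seeded to match the abstract's reported 9.81 and 3.21, not re-estimated from the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
countries, years = 7, 6                      # 7 countries, 2008-2013
n = countries * years
country = np.repeat(np.arange(countries), years)

# Synthetic regressors: OTT revenue and ISP CAPEX, plus an unobserved
# country effect that would bias plain pooled OLS.
ott_rev = rng.uniform(1, 10, n)
capex = rng.uniform(1, 10, n)
alpha = rng.normal(0, 5, countries)[country]     # country fixed effect
isp_rev = alpha + 9.81 * ott_rev + 3.21 * capex + rng.normal(0, 0.1, n)

def within(v):
    """Demean a variable within each country (the FE 'within' transform)."""
    means = np.array([v[country == c].mean() for c in range(countries)])
    return v - means[country]

# OLS on the demeaned data: the country effects cancel out.
X = np.column_stack([within(ott_rev), within(capex)])
y = within(isp_rev)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[0], beta[1] recover the true coefficients (about 9.81 and 3.21)
# even though the country effects were never observed.
```

The same within-transform applies to Model B by swapping the dependent variable; in practice one would use a panel-econometrics library rather than hand-rolled least squares.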
- Research Article
2
- 10.1016/j.peva.2020.102152
- Oct 3, 2020
- Performance Evaluation
Revenue sharing on the Internet: A case for going soft on neutrality regulations
- Research Article
- 10.1145/3453953.3453978
- Mar 5, 2021
- ACM SIGMETRICS Performance Evaluation Review
Revenue sharing contracts between Content Providers (CPs) and Internet Service Providers (ISPs) can act as leverage for enhancing the infrastructure of the Internet. ISPs can be incentivised to invest in network infrastructure that improves Quality of Service (QoS) for users if attractive contracts are negotiated between them and CPs. The idea is that part of the CPs' revenue is shared with ISPs to invest in infrastructure improvement. We propose a model in which CPs (leaders) determine contracts and an ISP (follower) reacts by strategically determining the infrastructure enhancement (effort) for each CP. Two cases are studied: (i) the ISP differentiates between the CPs and puts a (potentially) different level of effort into improving the QoS of each CP, and (ii) the ISP does not differentiate between CPs and makes an equal amount of effort for all of them. The latter scenario can be viewed as neutral behavior by the ISP. Our analysis of optimal contracts shows that the CPs' preference for the neutral or non-neutral regime depends on their monetization power - CPs that can better monetize their demand tend to prefer the non-neutral regime, whereas weaker CPs tend to prefer the neutral regime. Interestingly, ISP revenue, as well as social utility, is also found to be higher under the non-neutral regime. We then propose an intermediate regulatory regime that we call "soft-neutral", where the efforts put in by the ISP for the CPs need not be equal but the disparity is not wide. We show that the soft-neutral regime alleviates the loss in social utility of the neutral regime, and the outcome improves further when CPs determine their contracts through bargaining.
- Conference Article
17
- 10.23919/wiopt.2017.7959885
- May 1, 2017
Internet service providers (ISPs) struggle to invest in upgrading their networks to catch up with growing mobile data demand, while users have to face significant data overage fees. Pooling ISPs' network infrastructures can potentially enable better user experience and lower prices. For example, Google recently launched a cross-carrier MVNO (mobile virtual network operator) data plan called Project Fi, where users' devices can automatically access either of two partner cellular networks or any available open WiFi network. We consider the economic impact of cross-carrier MVNOs on the mobile data market. We begin by analyzing a network selection strategy that optimizes cross-carrier users' costs. We then study ISPs' behavior, deriving the prices that partner ISPs charge the cross-carrier MVNO and that the cross-carrier MVNO charges its end users. Although the cross-carrier MVNO may lose money from selling data, it can offset this loss with side revenue, e.g., advertisement revenue when users consume more content. We derive conditions under which the cross-carrier MVNO achieves a profit and its users reduce their costs. Finally, we use a real-world network quality dataset to simulate users' network selection behavior and demonstrate the benefits of the ISP competition brought by the cross-carrier MVNO.
- Research Article
6
- 10.1109/tnet.2020.3010550
- Oct 1, 2020
- IEEE/ACM Transactions on Networking
We examine competition between two Internet Service Providers (ISPs), where the first ISP provides basic Internet service and the second ISP provides Internet service plus content, i.e., enhanced service; the first ISP can partner with a Content Provider to provide the same content as the second ISP. When such a partnering arrangement occurs, the Content Provider pays the first ISP a transfer price for delivering the content. Users have heterogeneous preferences, and each in general faces three options: (1) buy basic Internet service from the first ISP; (2) buy enhanced service from the second ISP; or (3) buy enhanced service jointly from the first ISP and the Content Provider. We derive results on the existence and uniqueness of a Nash equilibrium, and provide closed-form expressions for the prices, user masses, and profits of the two ISPs and the Content Provider. When the first ISP has the ability to choose the transfer price and congestion is linear in the load, it is never optimal for the first ISP to set a negative transfer price in the hope of attracting more revenue from additional customers desiring enhanced service. Conversely, when congestion is sufficiently super-linear, the optimal strategy for the first ISP is either to set a negative transfer price (subsidizing the Content Provider) or to set a high transfer price that shuts the Content Provider out of the market.
- Conference Article
- 10.1145/2383276.2383309
- Jun 22, 2012
The paper describes an Internet Protocol (IP) version 6 (v6) introduction into an IP version 4 (v4) Internet Service Provider (ISP) network infrastructure. The case study driver is an ISP willing to introduce a new killer service related to Internet of Things (IoT)-style building automation. The service will be provided by the ISP in cooperation with third-party companies specialized in building automation. The ISP has to deliver the network access layer and to accommodate the building automation traffic throughout its network infrastructure. The third-party companies are system integrators and building automation solution vendors. IPv6 is suitable for such solutions because the operator cannot accommodate a large number of IPv4 embedded devices in its current network, owing to the lack of address space and the fact that many of those devices will need a clear two-way IP communication channel. The authors propose a strategy for IPv6 introduction into the operator's infrastructure based on the current network architecture, the present service portfolio, and several transition mechanisms. The strategy has been tested in a laboratory with a setup close or identical to the current operator's network. The criterion for a successful experiment is full two-way IPv6 application-layer connectivity between the IPv6 server and the IPv6 Internet of Things (IoT) cloud.
- Research Article
1
- 10.24138/jcomss.v11i1.116
- Mar 23, 2015
- Journal of Communications Software and Systems
Nowadays, many P2P applications proliferate on the Internet. The attractiveness of many of these systems relies on the collaborative approach used to exchange large resources without the dependence and associated constraints of centralized approaches, where a single server is responsible for handling all requests from clients. As a consequence, some P2P systems are also interesting and cost-effective approaches for content providers and other Internet players to adopt. However, there are several coexistence problems between P2P applications and Internet Service Providers (ISPs) due to the unforeseeable behavior of P2P traffic aggregates in ISP infrastructures. In this context, this work proposes a collaborative P2P/ISP system able to underpin the development of novel Traffic Engineering (TE) mechanisms, contributing to a better coexistence between P2P applications and ISPs. Using the devised system, two TE methods are described that are able to estimate and control the impact of P2P traffic aggregates on ISP network links. One of the TE methods allows ISP administrators to foresee the expected impact that a given P2P swarm will have on the underlying network infrastructure. The other TE method enables the definition of ISP-friendly P2P topologies, where specific network links are protected from P2P traffic. As a result, the proposed system and associated mechanisms will contribute to improved ISP resource management tasks and foster the deployment of innovative ISP-friendly systems.
- Conference Article
1
- 10.1109/idea49133.2020.9170660
- Feb 1, 2020
Quality of Service (QoS) is one of the most important success criteria for Internet Service Providers (ISPs), who own network infrastructure such as transmission media for both wired and wireless networks and the bandwidth of the connection to the Internet. As users, however, we need advanced resource management to maximize the utilization of our resources according to our requirements. ISPs provide only a fixed amount of data, according to the plan purchased by the user, at the maximum possible speed (bandwidth) of the user's network cell. Using that data allowance efficiently across a device requires a system that manages it intelligently. The main goal of this work is to combine analysis of user needs with analysis of user data to find the most accurate way to consume the data allowance: the device calculates the most efficient way to use all the data and automatically manages the usage speed (bandwidth) so that the full allowance is consumed by the end of the package duration. Recent surveys show that daily Internet use is increasing rapidly, and ISPs keep improving their products and services, offering ever more bandwidth and more reliable service; users must therefore consume their data packs carefully, or the data is exhausted quickly and provides little benefit. Researchers today focus on improving network infrastructure and providing more bandwidth for a better user experience. At present, ISPs control bandwidth with static techniques: for example, after 80 percent of the allowance is used, some ISPs reduce the speed to a fixed basic speed, while others maintain full speed until the data pack is exhausted.
For a fixed data plan, a smart recommender system is therefore needed to manage the speed (bandwidth) so that all available data is used properly according to each user's needs.
- Conference Article
1
- 10.1109/comsnets.2019.8711396
- Jan 1, 2019
An Internet service provider (ISP) tries to maximize profit by choosing an appropriate price for an offered service. In this paper, a system of two identical ISPs is considered. When the ISPs independently choose the price to charge and each repeatedly tries to improve its profit by changing its price until no further improvement is possible, they may reach an equilibrium called the Nash equilibrium. Another strategy is for one ISP to announce an appropriate price in advance, with the other ISP choosing its best-response strategy; this leads to Stackelberg competition. Both strategies are compared in the paper. It is assumed that some clients have access to multiple ISPs through multi-SIM handheld devices such as smartphones and tablets. Clients with access to multiple ISPs choose an ISP based on the price of the service, while clients with access to only one ISP have no choice but to take service from that ISP. Simulations show that the ISP that announces an appropriate price in advance is likely to do better than the ISP that sets its price later, after observing the announcer's price. Simulations also show that, in terms of both ISP profit and the quality of service provided to clients, announcing an appropriate price in advance is better than independently deciding prices, which may lead to a Nash equilibrium.
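The iterated price-adjustment process this abstract describes — each ISP in turn picks the profit-maximizing price against the other's current price until neither can improve — can be sketched with a toy linear demand model. The demand and cost parameters below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Toy symmetric market: each ISP's demand falls in its own price and
# rises in the rival's price (multi-SIM clients switch to the cheaper ISP).
A, B, C, COST = 10.0, 2.0, 1.0, 1.0   # intercept, own/cross slopes, unit cost

def profit(p_own, p_rival):
    demand = max(A - B * p_own + C * p_rival, 0.0)
    return (p_own - COST) * demand

def best_response(p_rival, grid=np.linspace(0, 10, 2001)):
    # Profit-maximizing price against the rival's current price.
    return grid[np.argmax([profit(p, p_rival) for p in grid])]

# Iterated best responses: the ISPs adjust in turn until prices settle,
# which is the Nash equilibrium of this pricing game.
p1 = p2 = 5.0
for _ in range(50):
    p1 = best_response(p2)
    p2 = best_response(p1)
# Analytic check for this toy model: the symmetric equilibrium solves
# p* = (A + B*COST) / (2B - C), i.e. p* = 4.0 for the values above.
```

In this sketch both prices converge to the analytic symmetric equilibrium; a Stackelberg variant would instead have the leader optimize its announced price while anticipating the follower's `best_response`.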
- Conference Article
3
- 10.1109/apnoms.2017.8094138
- Sep 1, 2017
The number of people with Internet access is increasing, and many people now want a reliable Internet service. An Internet service provider (ISP) wants to maximize its profit; whenever an ISP has to choose between profit and the quality of service (QoS) provided to clients, it is likely to choose profit. It is possible to pressure an ISP into providing good Internet service by imposing penalties when bad service is provided, so an ISP will try to maximize its profit while minimizing the penalties paid to clients. The question is how to choose the appropriate penalty value. In this paper, a system of ISPs and clients is considered. Clients have access to multiple ISPs using multiple SIM cards. An ISP provides an Internet service guarantee, and whenever clients do not get the assured service, a predefined penalty is paid to them. The ISP uses the best available solution to maximize profit. The system is simulated to find the effect of the penalty value on the average QoS that clients receive. Simulations show that during congestion, the average minimum QoS depends only on the penalty value: irrespective of other factors, such as ISP size or the number of customers of each ISP, the penalty value is the only factor that decides the average minimum QoS that clients get. Based on the simulations, a mathematical equation is obtained that relates the average minimum QoS to the penalty value; it can be used to find the penalty value that provides the desired average minimum QoS to clients.
- Book Chapter
- 10.1007/11780502_3
- Jan 1, 2006
In this paper, we consider interactions of Internet Service Providers (ISPs) and how these interactions can affect the overall traffic and resource allocation between ISPs. In particular, we consider a simplified two-level hierarchical model wherein there are a single tier-1 ISP and N > 1 tier-2 ISPs. Each tier-2 ISP needs to pay the tier-1 ISP for the connectivity service. At the same time, a tier-2 ISP can also arrange to have “private peering” links with other tier-2 ISPs. Therefore, each tier-2 ISP can optimize its utility by deciding on the proper routing of its traffic, either via the tier-1 ISP link or via the private peering links with other tier-2 ISPs. The tier-1 ISP, on the other hand, needs to decide on the proper resource allocation for all its tier-2 peers so as to avoid monopolization of its resources by a single peer (i.e., to achieve customer diversity). We investigate a distributed framework wherein a tier-1 ISP can achieve customer diversity while tier-2 peers perform their utility maximization in terms of traffic routing. We also explore other important issues such as sensitivity and convergence. Extensive simulations are carried out to quantify the merits of the proposed distributed framework.