Abstract

Whether intentional or unintentional, denial of service leads to substantial economic and reputational losses for both users and the web-service provider. Proper countermeasures, however, are possible only if the impact of such anomalies on the victim is understood and quantified. This paper discusses and evaluates essential performance metrics that distinguish transmission issues from application issues. Legitimate and attack traffic were synthetically generated in a hybrid testbed using open-source software tools. The experiment covers two scenarios, representing DDoS attacks and flash events, with varying attack strengths, to analyze the impact of anomalies on the server and the network. It is demonstrated that as traffic surges, response time increases and the performance of the target web server degrades. Server and network performance is measured using various network-level, application-level, and aggregate-level metrics, including throughput, average response time, number of legitimate active connections, and percentage of failed transactions.
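As a concrete illustration (not taken from the paper), the application-level metrics named above — average response time, throughput, and percentage of failed transactions — could be computed from a web-server transaction log roughly as follows. The record layout and field names here are hypothetical:

```python
# Each hypothetical transaction record: (start_time_s, duration_s, succeeded).
# Layout, names, and sample values are illustrative, not from the paper.
def summarize(transactions, window_s):
    """Return (avg_response_s, throughput_tps, failed_pct) for one window."""
    completed = [t for t in transactions if t[2]]
    failed = len(transactions) - len(completed)
    avg_response = (sum(t[1] for t in completed) / len(completed)) if completed else 0.0
    throughput = len(completed) / window_s  # successful transactions per second
    failed_pct = 100.0 * failed / len(transactions) if transactions else 0.0
    return avg_response, throughput, failed_pct

# Four transactions in a 2-second window; one failed.
log = [(0.0, 0.12, True), (0.5, 0.30, True), (1.0, 0.00, False), (1.5, 0.18, True)]
avg_rt, tput, fail_pct = summarize(log, window_s=2.0)
```

Under attack load these metrics move together: average response time rises, throughput of legitimate transactions falls, and the failed-transaction percentage grows.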

Highlights

  • In the event of a network traffic anomaly, users face either a drastic slowdown of the service or a complete outage

  • The paper defines performance metrics quantifying the quality of service (QoS) of the web server under normal conditions and under increased traffic load

  • A metric may measure the aggregate performance of the network, such as throughput; it may operate at the application level, such as transaction duration; or at the packet level, such as the number of retransmissions [4, 17]


Summary

INTRODUCTION

In the event of a network traffic anomaly, users face either a drastic slowdown of the service or a complete outage. Recent years have witnessed a rise in the frequency and strength of illegitimate anomalies known as DDoS attacks. These attacks compromise the availability of the victim server's web services. This creates the need for realistic techniques to evaluate performance and to measure the impact of anomalies, legitimate or illegitimate, on the web server's services. A network responding to anomalies needs to be tested repeatedly with short-duration attack traffic to evaluate the overall performance of the server and the cost of installing the required security measures. The paper presents an exhaustive review undertaken to comprehend the concept of performance and to quantify the impact of anomalies on web services. It defines performance metrics quantifying the quality of service (QoS) of the web server under normal conditions and under increased traffic load.

LITERATURE REVIEW
PERFORMANCE METRICS
EXPERIMENTAL SET-UP
Network Topology
Generating Traffic Traces
RESULTS AND DISCUSSIONS
Number of Legitimate Requests Dropped
Number of Legitimate Active Connections
Percentage of Failed Transactions
Percentage of Link Utilization
Legitimate Packet Drop Probability
CPU Utilization
CONCLUSIONS
FUTURE SCOPE

