Vulnerability Testing and Analysis on Websites and Web-Based Applications in the XYZ Faculty Environment Using Acunetix Vulnerability

  • Abstract
  • Literature Map
  • Similar Papers
Abstract

The internet's continuous evolution has profoundly impacted society through the advancement of website technology and applications, reshaping contemporary ways of life. These digital platforms offer unrestricted information access, overcoming spatial and temporal limitations. In the realm of software development, Vulnerability Assessment is essential for producing high-quality products, as seemingly minor errors can create dangerous vulnerabilities that malicious actors may exploit to pilfer information from websites or applications. This study examines the security level of the Integrated website and application within the Faculty of Medicine, Universitas Andalas (Fakultas XYZ) environment, utilizing the Acunetix Web Vulnerability Scanner tool. The initial scan revealed a threat level of 3 (high) for the Fakultas XYZ website and level 2 (medium) for the Integrated application. Following a recapitulation process, several web alerts were identified for optimization, including Cross-Site Scripting (XSS), Blind SQL Injection, Application error message, HTML form without CSRF protection, Development configuration file, Directory listing, Error message on page, and User credentials sent in clear text. The optimization process involved source code review and enhancement to improve website features. A subsequent scan post-optimization demonstrated a reduction in threat levels for both the website and the UNAND FK Symphony application, with both achieving threat level 1 (low).
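One of the alerts named above, "HTML form without CSRF protection," is typically remediated with per-session tokens. A minimal sketch in Python (the session ID, key handling, and function names are illustrative assumptions, not the study's actual fix):

```python
import hmac
import hashlib
import secrets

# Hypothetical server-side secret; in practice, load from secure configuration.
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Derive a per-session CSRF token to embed in each HTML form."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, submitted: str) -> bool:
    """Reject form submissions whose token does not match the session."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, submitted)

token = issue_csrf_token("session-abc123")
print(verify_csrf_token("session-abc123", token))     # genuine submission
print(verify_csrf_token("session-abc123", "forged"))  # forged submission
```

Each rendered form would carry the token in a hidden field, and the server would call `verify_csrf_token` before processing the POST.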

Similar Papers
  • Research Article
  • 10.21460/jutei.v3i1.147
HTML FORM GENERATOR BASED ON TABLE USING OBJECT ORIENTED PROGRAMMING
  • Jun 27, 2019
  • Jurnal Terapan Teknologi Informasi
  • Katon Wijana

To insert new data into a database table through a web-based application, a graphical user interface in the form of an HTML form is required. Each table field/attribute requires an appropriate form control in order to minimize errors in the data being entered. Because there is a relationship between a field's data type and the type of form control to be used, the HTML form interface can be generated automatically. HTML offers various form controls as tags, generally input tags; what distinguishes one control from another are attributes such as type, size, and value, so the type and content of a form control can be specified through parameters. An HTML form can be regarded as an object that contains many other objects in the form of form controls, so the Object Oriented Programming (OOP) paradigm can be applied to build it. From a table's metadata, the appropriate HTML form control can be derived, but each data type may have several candidate controls; therefore, before the generator creates the HTML form, a little user intervention is needed to obtain the desired interface.
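The metadata-to-control mapping the paper describes can be sketched roughly as follows; the type mapping, column tuples, and generated markup are illustrative assumptions, not the paper's actual generator:

```python
# Map SQL column types to HTML form controls, then render a form from
# (name, sql_type, size) column metadata. All names here are invented.

TYPE_TO_CONTROL = {
    "int": ("input", {"type": "number"}),
    "varchar": ("input", {"type": "text"}),
    "date": ("input", {"type": "date"}),
    "text": ("textarea", {}),
}

def generate_form(table, columns):
    """Build an HTML form string from a list of (name, sql_type, size)."""
    rows = []
    for name, sql_type, size in columns:
        tag, attrs = TYPE_TO_CONTROL.get(sql_type, ("input", {"type": "text"}))
        if tag == "textarea":
            rows.append(f'<label>{name}</label><textarea name="{name}"></textarea>')
        else:
            attrs = dict(attrs, name=name, size=str(size))
            attr_str = " ".join(f'{k}="{v}"' for k, v in attrs.items())
            rows.append(f"<label>{name}</label><{tag} {attr_str}>")
    fields = "\n  ".join(rows)
    return f'<form action="insert_{table}.php" method="post">\n  {fields}\n</form>'

html = generate_form("student", [("id", "int", 11), ("name", "varchar", 50)])
print(html)
```

The "user intervention" step the abstract mentions would amount to letting the user override an entry in `TYPE_TO_CONTROL` before generation.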

  • Research Article
  • 10.35314/gdaky847
Vulnerability Analysis on Semarang City Road Section Information System Website Using VAPT Method
  • Jul 1, 2025
  • INOVTEK Polbeng - Seri Informatika
  • Hanif Setia Nusantara + 3 more

Web-based public service applications in the digital governance era are increasingly vulnerable to cyber threats. This study analyzes the vulnerability of the Semarang City Road Information System website quantitatively using the Vulnerability Assessment and Penetration Testing (VAPT) method to evaluate its effectiveness in identifying security gaps. This system is part of an e-government service providing road infrastructure information but, like other technology-based systems, is susceptible to exploitation. The VAPT method used includes two main stages: Vulnerability Assessment to identify weaknesses and Penetration Testing to simulate attacks. The study identified 5 potential vulnerabilities: SQL Injection, Credit Card Number Disclosure, Insecure Direct Object Reference (IDOR), Cross-Site Scripting (XSS), and Error Message on Page. However, 80% of these were false positives, effectively filtered by Alibaba Cloud’s Web Application Firewall (WAF). The IDOR vulnerability was confirmed as valid, allowing unauthorized access to sensitive data through manipulation of the ID parameter in the URL. The original contribution of this research is the specific recommendation for implementing Indirect Object References mechanisms such as ID encryption, as well as emphasizing the need for comprehensive routine testing to improve security and prevent potential data misuse.
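The recommended indirect-object-reference mitigation can be sketched as follows; the session-scoped token map is one common realization and an assumption here, not the paper's exact mechanism (the paper suggests ID encryption):

```python
import secrets

# Expose random surrogate tokens in URLs instead of raw database IDs, so
# an attacker cannot enumerate records by manipulating an ?id= parameter.

class IndirectReferenceMap:
    def __init__(self):
        self._token_to_id = {}

    def expose(self, db_id):
        """Return an opaque token standing in for a real record ID."""
        token = secrets.token_urlsafe(16)
        self._token_to_id[token] = db_id
        return token

    def resolve(self, token):
        """Map a submitted token back to the record ID, or None if forged."""
        return self._token_to_id.get(token)

refs = IndirectReferenceMap()
url_token = refs.expose(42)            # use in the URL instead of ?id=42
print(refs.resolve(url_token))         # -> 42
print(refs.resolve("tampered-token"))  # -> None
```

A per-session map also prevents tokens leaked from one session from resolving in another.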

  • Book Chapter
  • Cited by 2
  • 10.1007/978-3-319-13461-1_43
A Survey on Conducting Vulnerability Assessment in Web-Based Application
  • Jan 1, 2014
  • Nor Fatimah Awang + 2 more

Many organizations have replaced their traditional systems with web-based applications to increase profit and, at the same time, improve the efficiency of activities such as customer support services and data transactions. However, web-based applications have become a major target for attackers due to common vulnerabilities that exist in them. Assessing the level of information security in a web-based application is a serious challenge for many organizations. One important step in ensuring the security of a web application is conducting vulnerability assessments periodically. A vulnerability assessment is a process of searching for any potential loopholes or vulnerabilities contained in a system. Most current assessment efforts involve searching for known vulnerabilities that commonly exist in web-based applications. The process of conducting a vulnerability assessment can be improved by understanding the functionality of the application and the characteristics of the underlying vulnerabilities. In this paper, we perform an empirical study of how to conduct a vulnerability assessment, with the aim of understanding how knowledge of the application's functionality, vulnerabilities, and activities benefits the assessment process from the perspective of application security.

  • Research Article
  • 10.30871/jaic.v9i2.9069
Enhancing Website Security Using Vulnerability Assessment and Penetration Testing (VAPT) Based on OWASP Top Ten
  • Mar 25, 2025
  • Journal of Applied Informatics and Computing
  • Diana Rohmaniah + 3 more

Website security is one of the main concerns in the digital era, given the increasing potential for cyber threats. This research aims to improve website security by using the Vulnerability Assessment and Penetration Testing (VAPT) method that refers to the OWASP Top Ten standard. The applied method includes four main stages: information gathering, vulnerability scanning, exploitation, and reporting. The results showed that there were several successfully exploited vulnerabilities, such as Clickjacking, Improper HTTP to HTTPS Redirection, Directory Listing, and Sensitive Information Disclosure, which were classified based on the OWASP Top Ten. The severity of the vulnerabilities was analyzed using Common Vulnerabilities and Exposures (CVE), Common Weakness Enumeration (CWE), and Common Vulnerability Scoring System (CVSS). The analysis results show that some vulnerabilities have high severity after considering the factual conditions of the system. This research provides specific remediation recommendations to address these vulnerabilities, such as the implementation of security headers, deletion of sensitive configuration files, and dependency updates. With this approach, the research is expected to contribute to improving website security and provide effective mitigation guidelines.
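The remediation step of adding security headers can be checked mechanically. The sketch below flags responses missing a recommended set; the chosen set is an assumption based on the vulnerabilities named above (Clickjacking, improper HTTP-to-HTTPS redirection), using standard header names:

```python
# Report which recommended security headers are absent from an HTTP
# response. The required set below is an illustrative assumption.

REQUIRED_HEADERS = {
    "X-Frame-Options",            # mitigates Clickjacking
    "Strict-Transport-Security",  # enforces HTTPS after first visit
    "X-Content-Type-Options",     # blocks MIME-type sniffing
    "Content-Security-Policy",    # restricts script/resource origins
}

def missing_security_headers(response_headers):
    """Return recommended headers absent from a {name: value} dict."""
    present = {name.title() for name in response_headers}
    return {h for h in REQUIRED_HEADERS if h.title() not in present}

sample = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
print(sorted(missing_security_headers(sample)))
```

Such a check is easy to fold into a CI pipeline so regressions are caught before deployment.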

  • Book Chapter
  • Cited by 16
  • 10.1007/978-3-540-72912-9_13
Vulnerability Analysis of Web-based Applications
  • Jan 1, 2007
  • Marco Cova + 2 more

In the last few years, the popularity of web-based applications has grown tremendously. A number of factors have led an increasing number of organizations and individuals to rely on web-based applications to provide access to a variety of services. Today, web-based applications are routinely used in security-critical environments, such as medical, financial, and military systems. Web-based systems are a composition of infrastructure components, such as web servers and databases, and of application-specific code, such as HTML-embedded scripts and server-side CGI programs. While the infrastructure components are usually developed by experienced programmers with solid security skills, the application-specific code is often developed under strict time constraints by programmers with little security training. As a result, vulnerable web-based applications are deployed and made available to the whole Internet, creating easily exploitable entry points for the compromise of entire networks. To ameliorate these security problems, it is necessary to develop tools and techniques to improve the security of web-based applications. The most effective approach would be to provide secure mechanisms that can be used by well-trained developers. Unfortunately, this is not always possible, and a second line of defense is represented by auditing the application code for possible security problems. This activity, often referred to as web vulnerability analysis, allows one to identify security problems in web-based applications at early stages of development and deployment. Recently, a number of methodologies and tools have been proposed to support the assessment of the security of web-based applications. In this chapter, we survey the current approaches to web vulnerability analysis and we propose a classification along two characterizing axes: detection model and analysis technique. We also present the most common attacks against web-based applications and discuss the effectiveness of certain analysis techniques in identifying specific classes of flaws.

  • Conference Article
  • Cited by 5
  • 10.1109/icws.2004.126
Web services for information extraction from the Web
  • Jun 6, 2004
  • Benjamin Habegger + 1 more

Extracting information from the Web is a complex task with different components, which can be either generic or task-specific: downloading a given page, following links, querying a web-based application via an HTML form and the HTTP protocol, querying a Web service via the SOAP protocol, and so on. Web services that execute information-extraction tasks therefore cannot simply be hard-coded (i.e., written and compiled once and for all in a given programming language). To build flexible information-extraction Web services, we need to be able to compose different subtasks together. We propose an XML-based language to describe information-extraction Web services as compositions of existing Web services and specific functions. The usefulness of the proposed framework is demonstrated by three real-world applications. (1) Search engines: we show how to describe a task that queries Google's Web service, retrieves more information about the results by querying their respective HTTP servers, and filters them according to this information. (2) E-commerce sites: an information-extraction Web service is built that gives access to an existing HTML-based e-commerce application such as Amazon. (3) Patent extraction: a final example shows how to describe an information-extraction Web service that queries a web-based application, extracts the set of result links, follows them, and extracts the needed information from the result pages. In all three applications, the generated description can easily be modified and extended to further meet the user's needs and create value-added Web services.

  • Book Chapter
  • Cited by 2
  • 10.1007/978-3-319-23276-8_14
Automated Security Testing Framework for Detecting SQL Injection Vulnerability in Web Application
  • Jan 1, 2015
  • Nor Fatimah Awang + 1 more

Today, almost all organizations have replaced their traditional systems with web-based applications to improve performance. This shift increases profit and, at the same time, the efficiency of activities such as customer support services and data transactions. Typically, a web application takes input from users through a web form and passes it to a database to obtain a response. Modern web-based applications use a web database to store critical information such as user credentials, financial and payment information, and company statistics. However, errors in validating user input can leave the database vulnerable to Structured Query Language Injection (SQLI) attacks, in which attackers insert malicious code into the user input in an attempt to gain access to confidential and sensitive data. Security testers need to identify appropriate test cases before exploiting SQL vulnerabilities in a web-based application during the testing phase. Identifying a web application's test cases and analyzing the results of an attack are critical issues that affect the effectiveness of security testing. This research therefore focuses on developing a framework for testing and detecting SQL injection vulnerabilities in web applications, in which test cases are generated automatically from SQLI attack patterns and then executed automatically. The primary focus of this paper is a framework that automates security testing based on input-injection attack patterns. To test the framework, we installed a vulnerable web application; the results show that the proposed framework can detect SQLI vulnerabilities successfully.
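The framework's core idea, generating test cases from attack patterns and checking responses for leaked database errors, can be sketched as follows; the payloads and error signatures are common examples, not the paper's actual corpus:

```python
# Pair each input parameter with every known SQLI attack pattern, and
# flag responses whose bodies leak database error messages.

ATTACK_PATTERNS = [
    "' OR '1'='1",
    "'; DROP TABLE users; --",
    '" OR ""="',
    "1' UNION SELECT null, version() --",
]

ERROR_SIGNATURES = ["sql syntax", "mysql_fetch", "odbc", "unclosed quotation"]

def generate_test_cases(param):
    """Return one {param: payload} test case per attack pattern."""
    return [{param: payload} for payload in ATTACK_PATTERNS]

def looks_vulnerable(response_body):
    """Flag a response whose body contains a database error signature."""
    body = response_body.lower()
    return any(sig in body for sig in ERROR_SIGNATURES)

cases = generate_test_cases("username")
print(len(cases))  # one test case per pattern
print(looks_vulnerable("You have an error in your SQL syntax"))
```

A real harness would submit each case over HTTP and log which parameters trigger a flagged response.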

  • Research Article
  • Cited by 5
  • 10.1109/temc.2022.3169820
A Bayesian Estimation of Confidence Limits for Multi-state System Vulnerability Assessment With IEMI
  • Aug 1, 2022
  • IEEE Transactions on Electromagnetic Compatibility
  • Yu Liu + 6 more

A Bayesian approach based on the vulnerability distribution is proposed to estimate the confidence limits of the state probability and the threat level of multi-state electronic systems interfered with by intentional electromagnetic interference (IEMI). The vulnerability distribution is used to describe the state probability function of the multi-state system (MSS) for a given IEMI threat level. When a small number of test samples are under a specific threat level and prior MSS information is known, the posterior estimation of the state probability function can be obtained based on Bayesian theory. Furthermore, to effectively apply the state probability function to assess the vulnerability of the MSS, the uncertainty of the state probability function is discussed. Considering state probabilities under the same threat level as random variables, all these state probabilities also follow the Dirichlet distribution. Thus, the confidence limits of the posterior state probability and the threat level can be inferred by the marginal distribution of the state probability, which is induced by combining the Dirichlet and vulnerability distributions. Finally, a case study is given to demonstrate the details of the calculation process of the vulnerability distribution, the state probability function, and the confidence limits under a lognormal distribution.
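The Dirichlet update at the heart of this approach can be sketched as follows; the prior pseudo-counts and test outcomes below are invented for illustration, not the paper's data:

```python
# Posterior mean state probabilities under a Dirichlet prior: add observed
# counts to the prior pseudo-counts, then normalize.

def dirichlet_posterior_mean(prior, observed):
    """Combine Dirichlet prior pseudo-counts with observed state counts."""
    alpha = [a + n for a, n in zip(prior, observed)]
    total = sum(alpha)
    return [a / total for a in alpha]

# Three hypothetical system states (e.g. normal / degraded / failed),
# observed over a small number of test shots at one threat level.
prior_counts = [1.0, 1.0, 1.0]  # uniform Dirichlet prior
test_outcomes = [6, 3, 1]       # counts from 10 hypothetical IEMI tests

posterior = dirichlet_posterior_mean(prior_counts, test_outcomes)
print([round(p, 3) for p in posterior])  # probabilities sum to 1
```

The paper's confidence limits come from the full posterior (marginal Beta distributions per state), not just the mean shown here.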

  • Research Article
  • Cited by 8
  • 10.18517/ijaseit.10.5.8862
Vulnerability Assessment and Penetration Testing (VAPT) Framework: Case Study of Government’s Website
  • Oct 15, 2020
  • International Journal on Advanced Science, Engineering and Information Technology
  • Ahmad Almaarif + 1 more

Information security is often neglected by individuals, employees, and even enterprises, with no proper strategy to raise awareness, promote consistency, and maintain performance in protecting sensitive, confidential, and critical data. One common technique is vulnerability assessment and penetration testing (VAPT), which verifies that a security strategy has been implemented in a computer system by analyzing both its strengths and weaknesses. SQL plays an essential role in a Relational Database Management System (RDBMS) and in the operation of a website because of its simplicity, flexibility, and integrity. To anticipate these types of threats and other Internet attacks, a goal-oriented penetration test conducted within a framework is recommended to identify the specific types of vulnerabilities that lead to business compromise and to avoid risks that adversely affect the enterprise. This study conducts VAPT to uncover possible threats and evaluate their potential impact, reported to the system owner through a proper engagement framework that allows systematic measurement. Government websites were selected for this research to reflect current trends in cyber communities, especially in Indonesia. The study found various vulnerabilities, including directory listing, full path disclosure, PHP info disclosure, and web server folder disclosure, among other potential threats, comprising 2 (two) critical, 6 (six) medium, and 2 (two) low risk levels.

  • Research Article
  • Cited by 18
  • 10.1093/bioinformatics/btp081
OnTheFly: a tool for automated document-based text annotation, data linking and network generation
  • Feb 17, 2009
  • Bioinformatics
  • Georgios A Pavlopoulos + 4 more

OnTheFly is a web-based application that applies biological named entity recognition to enrich Microsoft Office, PDF and plain text documents. The input files are converted into the HTML format and then sent to the Reflect tagging server, which highlights biological entity names like genes, proteins and chemicals, and attaches to them JavaScript code to invoke a summary pop-up window. The window provides an overview of relevant information about the entity, such as a protein description, the domain composition, a link to the 3D structure and links to other relevant online resources. OnTheFly is also able to extract the bioentities mentioned in a set of files and to produce a graphical representation of the networks of the known and predicted associations of these entities by retrieving the information from the STITCH database.
Availability: http://onthefly.embl.de, http://onthefly.embl.de/FAQ.html
Contact: pavlopou@embl.de
Supplementary information: Supplementary data are available at Bioinformatics online.

  • Research Article
  • Cited by 103
  • 10.1109/tts.2021.3066254
Face Morphing Attack Generation and Detection: A Comprehensive Survey
  • Sep 1, 2021
  • IEEE Transactions on Technology and Society
  • Sushma Venkatesh + 3 more

Face recognition has been successfully deployed in real-time applications, including secure applications such as border control. The vulnerability of face recognition systems (FRSs) to various kinds of attacks (both direct and indirect attacks) and face morphing attacks has received great interest from the biometric community. The goal of a morphing attack is to subvert an FRS at an automatic border control (ABC) gate by presenting an electronic machine-readable travel document (eMRTD) or e-passport that is obtained based on a morphed face image. Since the application process for an e-passport in the majority of countries requires a passport photograph to be presented by the applicant, a malicious actor and an accomplice can generate a morphed face image to obtain the e-passport. An e-passport with a morphed face image can be used by both the malicious actor and the accomplice to cross a border, as the morphed face image can be verified against both of them. This can result in a significant threat, as a malicious actor can cross the border without revealing the trace of his/her criminal background, while the details of the accomplice are recorded in the log of the access control system. This survey aims to present a systematic overview of the progress made in the area of face morphing in terms of both morph generation and morph detection. In this article, we describe and illustrate various aspects of face morphing attacks, including different techniques for generating morphed face images and state-of-the-art morph attack detection (MAD) algorithms based on a stringent taxonomy as well as the availability of public databases, which allow us to benchmark new MAD algorithms in a reproducible manner. The outcomes of competitions and benchmarking, vulnerability assessments, and performance evaluation metrics are also provided in a comprehensive manner. Furthermore, we discuss the open challenges and potential future areas that need to be addressed in the evolving field of biometrics.

  • Research Article
  • Cited by 1
  • 10.2478/environ-2023-0008
Ecosystem services, vulnerability and threat levels of Ramsar wetlands in the complex of Aurès Sbkhates, North-Eastern Algeria
  • Jun 1, 2023
  • Environmental & Socio-economic Studies
  • Saida Bougoffa + 3 more

A socio-economic study was carried out in the Aurès Sebkhates wetlands complex in north-eastern Algeria. This study aimed to identify the ecosystem services obtained by local stakeholders, describe the anthropogenic impacts, and evaluate the vulnerability and threat levels of three Ramsar wetlands: Garaet Timerganine (freshwater), Garaet Annk Djemel & El Merhsel (brackish water) and Sebkhet Ezzmoul (salt water). A socio-economic survey was conducted of 70 randomly selected households (social group) and 24 people belonging to the local administration (focus group). Vulnerability and threat levels were analyzed. Provisioning and monetary value are the most relevant ecosystem services (water pumping, grazing, agriculture, area for recreation, plant and egg collection, salt mining). Indirect ecosystem services rendered by the studied wetlands (water treatment/flood control) are only known by the focus group. 95% of the surveyed locals believed that the studied wetlands have experienced significant degradation in recent years, mainly due to human activity. Our results revealed significant threats from salt mining and the excessive water pumping practiced within the three sites. Natural stressors such as drying out, erosion and siltation also contribute to the disturbance of these wetlands. Analysis of vulnerability (Vt) and threat (T) indices revealed that Garaet Timerganine is highly vulnerable (Vt = 1.48; T = 17.16), Ezzmoul is moderately vulnerable (Vt = 0.23; T = 2.3) and Annk Djemel & El Merhsel are weakly vulnerable (Vt = 0.04; T = 0.28). This study highlighted the most vulnerable wetlands in order to prioritize them and to build a strategy for their conservation and wise use.

  • Conference Article
  • Cited by 14
  • 10.1109/nss.2010.55
Information-Theoretic Detection of Masquerade Mimicry Attacks
  • Sep 1, 2010
  • Juan E Tapiador + 1 more

In a masquerade attack, an adversary who has stolen a legitimate user's credentials attempts to impersonate that user in order to carry out malicious actions. Automatic detection of such attacks is often undertaken by constructing models of each user's normal behaviour and then measuring significant departures from them. One potential weakness of this approach is that anomaly detection algorithms are generally susceptible to being deceived. In this paper, we first investigate how a resourceful masquerader can successfully evade detection while still accomplishing his goals. We then propose an algorithm based on the Kullback-Leibler divergence that attempts to identify whether a sufficiently anomalous attack is present within an apparently normal request. Our experimental results indicate that the proposed scheme achieves considerably better detection quality than adversarial-unaware approaches.
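The divergence measure at the core of this approach can be sketched as follows; the command profiles, smoothing constant, and threshold idea are illustrative assumptions, not the paper's dataset or exact formulation:

```python
import math
from collections import Counter

# Kullback-Leibler divergence between a user's historical command
# distribution (P) and the current session's distribution (Q), with
# additive smoothing so unseen commands do not produce infinities.

def kl_divergence(p_counts, q_counts, eps=1e-6):
    """D(P || Q) over the union of observed commands."""
    vocab = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) + eps * len(vocab)
    q_total = sum(q_counts.values()) + eps * len(vocab)
    div = 0.0
    for cmd in vocab:
        p = (p_counts[cmd] + eps) / p_total
        q = (q_counts[cmd] + eps) / q_total
        div += p * math.log(p / q)
    return div

profile = Counter({"ls": 40, "cd": 30, "vim": 20, "make": 10})
normal  = Counter({"ls": 8, "cd": 6, "vim": 4, "make": 2})
attack  = Counter({"nc": 10, "chmod": 5, "wget": 5})

print(kl_divergence(profile, normal) < kl_divergence(profile, attack))  # True
```

A detector would flag a session when the divergence from the user's profile exceeds a calibrated threshold.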

  • Dissertation
  • 10.24377/ljmu.t.00008897
INTRUSION PREDICTION SYSTEM FOR CLOUD COMPUTING AND NETWORK BASED SYSTEMS
  • Jul 11, 2018
  • Mohamed Abdlhamed

Cloud computing offers cost-effective computational and storage services with on-demand, scalable capacity according to customers' needs. These properties encourage organisations and individuals across disciplines to migrate from classical computing to cloud computing. Although cloud computing is a trendy technology that opens horizons for many businesses, it is a new paradigm that exploits existing computing technologies within a new framework rather than being a novel technology. This means that cloud computing has inherited classical computing problems that remain challenging. Cloud computing security is considered one of the major problems, requiring strong security systems to protect the system and the valuable data stored and processed in it. Intrusion detection systems are an important security component and defence layer that detect cyber-attacks and malicious activities in cloud and non-cloud environments. However, they have limitations: attacks are often detected only after their damage is already done. In recent years, cyber-attacks have increased rapidly in volume and diversity. In 2013, for example, over 552 million customers' identities and crucial information were revealed through data breaches worldwide [3]. These growing threats are further demonstrated by the 50,000 daily attacks on the London Stock Exchange [4]. It has been predicted that the economic impact of cyber-attacks will cost the global economy $3 trillion on aggregate by 2020 [5]. This thesis proposes an Intrusion Prediction System capable of sensing an attack before it happens in cloud or non-cloud environments. The proposed solution is based on assessing the host system's vulnerabilities and monitoring network traffic for attack preparations. It has three main modules. The monitoring module observes the network for any intrusion preparations; for it, the thesis proposes a new dynamic-selective statistical algorithm for detecting scan activities, part of the reconnaissance that represents an essential step in preparing a network attack. The proposed method performs selective statistical analysis of network traffic, searching for indications of an attack or intrusion by exploring and applying different statistical and probabilistic scan-detection methods. The second module is vulnerability assessment, which evaluates the weaknesses and faults of the system and measures the probability of the system falling victim to a cyber-attack. Finally, the prediction module combines the output of the other two modules and performs a risk assessment of the system's security based on intrusion prediction. The conducted experiments showed that the suggested system outperforms analogous methods in network scan detection, implying a significant improvement to the security of the targeted system. The scan-detection algorithm achieved high detection accuracy with 0% false negatives and 50% false positives. In terms of performance, the detection algorithm consumed only 23% of the data needed for analysis compared with the best-performing rival detection method.
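The scan-detection idea can be illustrated with a toy detector that flags sources contacting many distinct ports within one observation window; the threshold and window are assumptions, and this is far simpler than the thesis's dynamic-selective algorithm:

```python
from collections import defaultdict

# Flag any source IP that touches at least `port_threshold` distinct
# destination ports in a single observation window -- the classic
# signature of a port scan during attack reconnaissance.

def detect_scanners(events, port_threshold=10):
    """events: list of (source_ip, dst_port) pairs from one window."""
    ports_by_src = defaultdict(set)
    for src, port in events:
        ports_by_src[src].add(port)
    return {src for src, ports in ports_by_src.items()
            if len(ports) >= port_threshold}

window = [("10.0.0.5", p) for p in range(20, 35)]  # 15 distinct ports
window += [("10.0.0.9", 443), ("10.0.0.9", 80)]    # ordinary client
print(detect_scanners(window))  # {'10.0.0.5'}
```

Statistical variants replace the fixed threshold with a model of each source's expected fan-out, which is closer to what the thesis explores.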
