RelRank: A relevance-based author ranking algorithm for individual publication venues

Similar Papers
  • Book Chapter
  • Cited by 36
  • 10.1007/978-3-642-28493-9_45
Publication Venue Recommendation Using Author Network’s Publication History
  • Jan 1, 2012
  • Hiep Luong + 4 more

Selecting a good conference or journal in which to publish a new article is very important to many researchers and scholars. The choice of publication venue is usually based on the author's existing knowledge of venues in their research domain or on the match between the conference topics and the paper's content, so authors may be unaware of new or more appropriate venues to which their paper could be submitted. A traditional way to recommend a conference to a researcher is to analyze her paper and compare it to the topics of different conferences using content-based analysis. However, this approach can make errors due to mismatches caused by ambiguity in text comparisons. In this paper, we present a new approach that allows researchers to automatically find appropriate publication venues for their research paper by exploring the author's network of related co-authors and other researchers in the same domain. This work is part of our social-network-based recommendation research for publication venues and hot research topics. Experiments with a set of ACM SIG conferences show that our new approach outperforms the content-based approach and provides accurate recommendations. This work also demonstrates the feasibility of our ongoing effort to use social network analysis of researchers and experts in relevant research domains for a variety of recommendation tasks.
Keywords: recommender systems, publication history, kNN, machine learning, social network analysis
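The co-author-network idea can be sketched as a weighted vote over the publication venues of an author's strongest collaborators. This is a minimal illustration; the function name, data shapes, and weighting are assumptions, not the authors' exact kNN method:

```python
from collections import Counter

def recommend_venues(author, coauthor_graph, publication_history, k=3, top_n=2):
    """Rank venues by how often the author's k strongest co-authors published in them."""
    # Neighbours in the co-authorship network, ranked by edge weight
    # (e.g., number of jointly written papers); keep the k strongest.
    neighbours = sorted(coauthor_graph.get(author, {}).items(),
                        key=lambda kv: kv[1], reverse=True)[:k]
    votes = Counter()
    for coauthor, weight in neighbours:
        for venue in publication_history.get(coauthor, []):
            votes[venue] += weight  # weight each vote by collaboration strength
    return [venue for venue, _ in votes.most_common(top_n)]
```

A venue that several close collaborators publish in repeatedly will outrank one mentioned once by a distant co-author.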

  • Supplementary Content
  • Cited by 2
  • 10.7759/cureus.50308
Measuring the Impact of Data Sharing: From Author-Level Metrics to Quantification of Economic and Non-tangible Benefits
  • Dec 11, 2023
  • Cureus
  • Enzo Emanuele + 1 more

In early 2023, the National Institutes of Health (NIH) implemented its Data Management and Sharing (DMS) Policy, requiring researchers to share scientific data produced with NIH funding. The policy's objective is to amplify the benefits of public investment in research by promoting the dissemination and reusability of primary data. Given this backdrop, identifying a robust methodology to assess the impact of data sharing across diverse research domains is essential. In this review, we adopted two methodological paradigms, the bottom-up and top-down strategies, and employed content analysis to pinpoint established methodologies and innovative practices within this intricate field. Although numerous author-level metrics are available to gauge the impact of data sharing, their application is still limited. Non-traditional metrics, encompassing economic (e.g., cost savings) and intangible benefits, presently appear to hold more potential for evaluating the impact of primary data sharing. Finally, we address the primary obstacles encountered by open data policies and introduce an innovative "Shared model for shared data" framework to bolster data sharing practices and refine evaluation metrics.

  • Research Article
  • Cited by 1
  • 10.4038/icter.v9i1.7167
Improving citation network scoring by incorporating author and program committee reputation
  • Jul 13, 2016
  • International Journal on Advances in ICT for Emerging Regions (ICTer)
  • Dineshi Peiris + 1 more

Publication venues play an important role in the scholarly communication process. The number of publication venues has been increasing yearly, making it difficult for researchers to determine the most suitable venue for their publication. Most existing methods use citation count as the metric to measure the reputation of publication venues; however, this does not take into account the quality of citations. Therefore, it is vital to have a publication venue quality estimation mechanism. The ultimate goal of this research project is to develop a novel approach for ranking publication venues by considering publication history. The main aim of this work is to propose a mechanism to identify the key Computer Science journals and conferences across various fields of research. Our approach is based entirely on the citation network formed by publications. A modified version of the PageRank algorithm is used to compute the ranking score for each publication. In our publication ranking method, several aspects contribute to the importance of a publication, including the number of citations, the rating of the citing publications, a time metric, and the authors' reputation. Scores for known publication venues are formulated from the scores of their publications, while new publication venues are ranked using the scores of Program Committee members, which are derived from their ranking scores as authors. Experimental results show that our publication ranking method reduces the bias against more recent publications, while also providing a more accurate way to determine publication quality.
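The modified-PageRank idea can be sketched as a standard power iteration over a citation graph with per-edge weights standing in for factors such as recency or author reputation. This is an illustrative sketch, not the authors' exact formulation:

```python
def weighted_pagerank(citations, weights, d=0.85, iters=50):
    """Power iteration for PageRank on a citation graph.

    citations: dict mapping each paper to the set of papers it cites.
    weights:   dict mapping (citing, cited) edges to a positive weight
               (a hypothetical stand-in for time/reputation factors).
    """
    nodes = list(citations)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            incoming = 0.0
            for m in nodes:
                if n in citations[m]:
                    # Distribute m's rank over its out-edges, proportionally to weight.
                    out_w = sum(weights[(m, t)] for t in citations[m])
                    incoming += rank[m] * weights[(m, n)] / out_w
            new[n] = (1 - d) / len(nodes) + d * incoming
        rank = new
    return rank
```

Raising the weight of an edge (e.g., for a recent citation) shifts more of the citing paper's score onto that target, which is how such a scheme can counteract the bias against newer publications.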

  • Research Article
  • 10.1515/opis-2025-0035
Author Name Disambiguation in Scholarly Research: A Bibliometric Perspective
  • Jan 23, 2026
  • Open Information Science
  • Hesham Amin Hamdy El Shamly + 1 more

The rapid expansion of scholarly publishing has amplified the long-standing challenge of author name ambiguity in academic databases. This issue, manifesting as homonymy and synonymy, undermines the accuracy of bibliometric analyses, author-level metrics, and research evaluation systems. Author Name Disambiguation (AND) has thus emerged as a critical focus area in digital scholarship, with evolving strategies ranging from supervised machine learning and graph-based models to the adoption of persistent digital identifiers like ORCID. Despite notable advancements, significant challenges remain – particularly in linguistically diverse and underrepresented regions – where metadata inconsistencies, transliteration issues, and limited ORCID adoption exacerbate disambiguation errors. This study presents a comprehensive bibliometric analysis of 2,004 publications on AND from 2005 to 2024, sourced from the Scopus database. Using tools such as Biblioshiny and VOSviewer, the analysis identifies publication trends, leading authors and institutions, core sources, co-authorship networks, and thematic evolution in the field. Findings highlight increasing international collaboration, the dominance of computer science-driven methodologies, and the critical role of metadata quality and institutional frameworks. The study concludes with recommendations for inclusive, multilingual, and interoperable disambiguation systems, advocating for cross-disciplinary collaboration to ensure equitable author identification in global scholarly communication.

  • Research Article
  • Cited by 11
  • 10.1109/access.2021.3052025
Systematic Mapping of Open Data Studies: Classification and Trends From a Technological Perspective
  • Jan 1, 2021
  • IEEE Access
  • Robert Enriquez-Reyes + 5 more

The objective of this paper is to classify and analyse all research on open data performed in the scientific community from a technological viewpoint, providing a detailed exploration based on six key facets: publication venue, impact, subject, domain, life-cycle phases and type of research. This paper therefore provides a consolidated overview of the open data arena that allows readers to identify well-established topics, trends, and open research issues. Additionally, we provide an extensive qualitative discussion of the most interesting findings to pave the way for future research. Our first identification phase resulted in 893 relevant peer-reviewed articles, published between 2006 and 2019 in a wide variety of venues. Analysis of the results shows that open data research grew slowly from 2006 but increased significantly from 2009. In 2019, research interest in open data from a technological perspective decreased overall. This could indicate that research is beginning to stabilise, i.e., the open data research hype is over and the field is reaching maturity. The main findings are (i) increasing research effort on Semantic Web technologies as a mechanism to publish and reuse linked open data; (ii) software systems proposed to solve open data technical problems; and (iii) the need to consider technological aspects of legislation and standardization in order to widely introduce open data in society. Finally, we provide complementary insights regarding open data innovation projects, with special emphasis on publication (e.g., open data portals) and consumption (e.g., open data as a business enabler) of open data.

  • Conference Article
  • Cited by 1
  • 10.1109/mysec.2015.7475187
Test case prioritization with textual comparison metrics
  • Dec 1, 2015
  • Rooster Tumeng + 2 more

Regression testing of a large test pool requires a prioritization technique that caters to requirement changes. Conventional prioritization techniques cover only methods for finding an ideal ordering of test cases, neglecting requirement changes. In this paper, we propose string-dissimilarity-based priority assignment for test cases through a combination of classical and non-classical textual comparison metrics, and we elaborate a prioritization algorithm that considers requirement changes. The proposed technique is suitable for preliminary testing when complete information about the program is not available. We evaluated the technique against random permutations using three textual comparison metrics and report the findings of the experiment.
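String-dissimilarity prioritization can be sketched as a greedy farthest-first ordering using a ratio-based textual metric. This is one plausible instantiation, assuming `difflib`'s similarity ratio as the metric; the paper itself combines several classical and non-classical metrics:

```python
import difflib

def prioritize(test_cases):
    """Greedy ordering: repeatedly pick the test case most dissimilar (textually)
    to those already selected, so early tests exercise diverse behaviour."""
    def dissim(a, b):
        # 1 - similarity ratio: 0.0 for identical strings, near 1.0 for unrelated ones.
        return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

    ordered = [test_cases[0]]
    remaining = list(test_cases[1:])
    while remaining:
        # Farthest-first: maximize the distance to the closest already-chosen case.
        nxt = max(remaining, key=lambda t: min(dissim(t, s) for s in ordered))
        ordered.append(nxt)
        remaining.remove(nxt)
    return ordered
```

Near-duplicate test descriptions sink to the end of the ordering, while textually distinct ones surface early.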

  • Research Article
  • 10.11591/csit.v6i2.p202-213
Attack detection in internet of things networks with deep learning using deep transfer learning method
  • Jul 1, 2025
  • Computer Science and Information Technologies
  • Riki Abdillah Hasanuddin + 1 more

Cybersecurity has become a crucial part of the information management framework of internet of things (IoT) device networks. The large-scale distribution of IoT networks and the complexity of the communication protocols they use contribute to the widespread vulnerabilities of IoT devices. Transfer learning in deep learning can reach optimal performance faster than traditional machine learning models, since it leverages knowledge from previous models that already understand the relevant features. The base model was built using the 1-dimensional convolutional neural network (1D-CNN) method, with training and test data from the source-domain dataset. Model 1 was constructed using the same method as the base model, with training and test data from the target-domain dataset. This model detected known attacks at a rate of 99.352%, but did not perform well on unknown attacks, with an accuracy of 84.645%. Model 2 is an enhancement of model 1 that incorporates transfer learning from the base model, and its results improved significantly over model 1: an accuracy and precision of 98.86% and 99.17%, respectively, allowing it to detect previously unknown attacks. Even with a slight decrease in normal-traffic detection, most attacks can still be detected.

  • Conference Article
  • Cited by 2
  • 10.1115/vvs2019-5143
Validation of a Surrogate Model for Marine Mammal Lung Dynamics Under Underwater Explosive Impulse
  • May 15, 2019
  • Emily L Guzas + 5 more

Primary blast injury (PBI), which refers to gross blast-related trauma or traces of injury in air-filled tissues or tissues adjacent to air-filled regions (rupture/lesions, contusions, hemorrhaging), has been documented in a number of marine mammal species after blast exposure [1, 2, 3]. However, very little is known about marine mammal susceptibility to PBI except in rare cases of opportunistic studies. As a result, traditional techniques rely on analyses using small-scale terrestrial mammals as surrogates for large-scale marine mammals. For an In-house Laboratory Independent Research (ILIR) project sponsored by the Office of Naval Research (ONR), researchers at the Naval Undersea Warfare Center, Division Newport (NUWCDIVNPT), have undertaken a broad 3-year effort to integrate computational fluid-structure interaction techniques with marine mammal anatomical structure. The intent is to numerically simulate the dynamic response of a marine mammal thoracic cavity and air-filled lungs to shock loading, to enhance understanding of the response of marine mammal lungs to shock loading in the underwater environment. In the absence of appropriate test data from live marine mammals, a crucial part of this work involves validating the code against test data for a suitable surrogate test problem. This research employs a surrogate of an air-filled spherical membrane structure subjected to shock loading as a first-order approximation to understanding marine mammal lung response to underwater explosions (UNDEX). This approach incrementally improves upon the currently used one-dimensional spherical air bubble approximation of marine mammal lung response by providing an encapsulating boundary for the air. The encapsulating structure is membranous, with a minimal simplified representation that does not account for marine mammal species-specific and individual animal differences in tissue composition, rib mechanics, and mechanical properties of interior lung tissue.
NUWCDIVNPT partnered with the Naval Submarine Medical Research Laboratory (NSMRL) to design and execute a set of experiments to investigate the shock response of an air-filled rubber dodgeball in a shallow underwater environment. These tests took place in the 2.13 m (7-ft) diameter pressure tank at the University of Rhode Island, with test measurements including pressure data and digital image correlation (DIC) data captured with high-speed cameras in a stereo setup. The authors developed 3-dimensional computational models of the dodgeball experiments using Dynamic System Mechanics Advanced Simulation (DYSMAS), a Navy fluid-structure interaction code. DYSMAS models of a variety of different problems involving submerged pressure vessel structures responding to hydrostatic and/or UNDEX loading have been validated against test data [4]. Proper validation of fluid structure interaction simulations is quite challenging, requiring measurements in both the fluid and structure domains. This paper details the development of metrics for comparison between test measurements and simulation results, with a discussion of potential sources of uncertainty.

  • Research Article
  • Cited by 4
  • 10.1016/j.aap.2018.04.017
Frontal crashworthiness characterisation of a vehicle segment using curve comparison metrics
  • Apr 24, 2018
  • Accident Analysis and Prevention
  • D Abellán-López + 2 more

  • Research Article
  • Cited by 1
  • 10.4018/ijsodit.2012010102
Scholarly Influence Research (SIR)
  • Jan 1, 2011
  • International Journal of Social and Organizational Dynamics in IT
  • Hirotoshi Takeda + 3 more

Following previous research findings, this paper argues that the currently predominant method of evaluating scholar performance, publication counts in "quality" journals, is flawed due to the subjectivity inherent in generating the list of approved journals and the absence of a definition of quality. Truex, Cuellar, and Takeda (2009) improved on this method by substituting a measurement of "influence" using the Hirsch statistics to measure ideational influence. Since the h-family statistics are a measure of productivity and the uptake of a scholar's ideas expressed in publications, this methodology privileges the uptake of a scholar's ideas over the venue of publication. Influence is also built through means other than having one's papers read and cited. The interaction between scholars resulting in co-authored papers is another way to build scholarly influence. This aspect of scholarly influence, which the authors term social influence, can be assessed by Social Network Analysis (SNA) metrics that examine the nature and strength of co-authoring networks among IS scholars. The paper demonstrates the method of assessing social influence through an analysis of the social network of AMCIS scholars and compares the results of this analysis with other co-authorship networks from the ECIS and ICIS communities.

  • Research Article
  • Cited by 7
  • 10.3390/make6030073
Evaluation Metrics for Generative Models: An Empirical Study
  • Jul 7, 2024
  • Machine Learning and Knowledge Extraction
  • Eyal Betzalel + 2 more

Generative models such as generative adversarial networks, diffusion models, and variational auto-encoders have become prevalent in recent years. While it is true that these models have shown remarkable results, evaluating their performance is challenging. This issue is of vital importance to push research forward and identify meaningful gains from random noise. Currently, heuristic metrics such as the inception score (IS) and Fréchet inception distance (FID) are the most common evaluation metrics, but what they measure is not entirely clear. Additionally, there are questions regarding how meaningful their score actually is. In this work, we propose a novel evaluation protocol for likelihood-based generative models, based on generating a high-quality synthetic dataset on which we can estimate classical metrics for comparison. This new scheme harnesses the advantages of knowing the underlying likelihood values of the data by measuring the divergence between the model-generated data and the synthetic dataset. Our study shows that while FID and IS correlate with several f-divergences, their ranking of close models can vary considerably, making them problematic when used for fine-grained comparison. We further use this experimental setting to study which evaluation metric best correlates with our probabilistic metrics.
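The protocol's core comparison, a divergence between model-generated data and a reference synthetic dataset, can be illustrated in the discrete case with the KL divergence. This is a toy stand-in; the paper estimates f-divergences on a high-quality synthetic dataset with known likelihoods:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions given as aligned probability lists.

    Zero exactly when p == q; grows as the model distribution q drifts from
    the reference p. Terms with p_i == 0 contribute nothing by convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

With known reference probabilities, such a divergence gives a ground-truth-anchored score against which heuristic metrics like FID or IS can be compared.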

  • Research Article
  • Cited by 1
  • 10.1109/access.2020.2978893
A General Approach to Uniformly Handle Different String Metrics Based on Heterogeneous Alphabets
  • Jan 1, 2020
  • IEEE Access
  • Francesco Cauteruccio + 4 more

In the last few years, we have witnessed a great increase in the use of strings in the most disparate areas. In the meantime, the development of the Internet has brought the necessity of managing strings from very different contexts, possibly using different alphabets. This issue is not addressed by the numerous string comparison metrics previously proposed in the literature. In this paper, we aim to provide a contribution in this context. First, we propose an approach to measure the similarity of strings based on different alphabets. Then we show that our approach can be adapted to several classic string comparison metrics and that each specialization can address completely different issues.
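The idea of comparing strings over heterogeneous alphabets can be sketched by translating one string into the other's alphabet via a correspondence table and then applying a classic metric such as Levenshtein distance. The mapping step is an illustrative stand-in for the paper's alphabet-matching machinery:

```python
def levenshtein(a, b):
    """Classic edit distance via dynamic programming (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1]

def cross_alphabet_distance(s1, s2, mapping):
    """Translate s2's symbols into s1's alphabet via a correspondence table,
    then compare with a classic metric. Unmapped symbols pass through unchanged."""
    translated = "".join(mapping.get(ch, ch) for ch in s2)
    return levenshtein(s1, translated)
```

Once the alphabets are aligned, any classic metric (Levenshtein, Hamming, and so on) can be plugged in unchanged, which mirrors the paper's point that one general approach can specialize to several metrics.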

  • Research Article
  • Cited by 14
  • 10.1109/access.2022.3232288
Generalization of Relative Change in a Centrality Measure to Identify Vital Nodes in Complex Networks
  • Jan 1, 2023
  • IEEE Access
  • Koduru Hajarathaiah + 4 more

Identifying vital nodes is important in disease research, rumor spreading, viral marketing, and drug development. The vital nodes in any network are those used to spread information as widely as possible. Centrality measures such as degree centrality (D), betweenness centrality (B), closeness centrality (C), Katz centrality (K), clustering coefficient (CC), PageRank (PR), Local and Global Centrality (LGC), and Isolating Centrality (ISC) can be used to effectively quantify vital nodes. Most of these measures are defined in the literature based on a network's local and/or global structure. However, they are time-consuming and inefficient for large-scale networks, and they cannot study the effect of removing vital nodes in resource-constrained networks. To address these concerns, we propose six new centrality measures, namely GRACC, LRACC, GRAD, LRAD, GRAK, and LRAK. We develop these measures based on the relative change of the clustering coefficient, degree, and Katz centralities after the removal of a vertex. Next, we compare the proposed centrality measures with D, B, C, CC, K, PR, LGC, and ISC to demonstrate their efficiency and time complexity. We utilize the SIR (Susceptible-Infected-Recovered) and IC (Independent Cascade) models to study the maximum information spread of the proposed measures against conventional ones. We perform extensive simulations on large-scale real-world data sets and show that local centrality measures perform better than global measures in some networks in terms of time complexity and information spread. We also observe that the number of cliques drastically improves the efficiency of global centrality measures.
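The "relative change after removal" idea behind these measures can be illustrated with degree centrality: compare a network-level score before and after deleting a vertex. This is a simplified stand-in, not the paper's exact GRAD/LRAD definitions:

```python
def degree_centrality(graph):
    """Normalized degree centrality for an undirected graph given as
    a dict mapping each vertex to its set of neighbours."""
    n = len(graph)
    return {v: len(nbrs) / (n - 1) for v, nbrs in graph.items()}

def relative_degree_change(graph, v):
    """Relative drop in total degree centrality when v is removed.

    Vertices whose removal causes a large relative change are candidates
    for 'vital' nodes; this mirrors the relative-change construction the
    paper applies to degree, clustering-coefficient, and Katz centralities.
    """
    base = sum(degree_centrality(graph).values())
    reduced = {u: nbrs - {v} for u, nbrs in graph.items() if u != v}
    after = sum(degree_centrality(reduced).values())
    return (base - after) / base
```

In a star graph, removing the hub wipes out all connectivity (relative change 1), while removing a leaf leaves the normalized total unchanged, which is exactly the contrast such measures exploit.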

  • Research Article
  • Cited by 7
  • 10.5555/1937055.1937062
Topic-driven multi-type citation network analysis
  • Apr 28, 2010
  • Zaihan Yang + 2 more

In every scientific field, automated citation analysis enables the estimation of the importance or reputation of publications and authors. In this paper, we focus on the task of ranking authors. Although previous work has used content-based approaches or citation network link analyses, their combination with topical link analyses is unexplored. Moreover, previous citation analysis applications are typically limited to a graph based on author citations, or a bipartite graph based on author and paper citations. We present in this paper a novel integrated probabilistic model which combines a content-based approach with a multi-type citation network that integrates citations among papers, authors, affiliations and publishing venues in a single model. We further introduce the application of Topical PageRank to citation network link analysis, motivated by the fact that researchers may be experts in different scientific domains. Finally, we describe a heterogeneous link analysis of the citation network, exploring the impact of weighting various factors. Comparative experimental results based on data extracted from the ACM digital library show that 1) the multi-type citation graph works better than citation graphs integrating fewer types of entities, 2) the use of Topical PageRank can further improve performance, and 3) Heterogeneous PageRank with parameter tuning can work even better than Topical PageRank.

  • Dissertation
  • 10.53846/goediss-8876
User identification and community exploration via mining big personal data in online platforms
  • Feb 21, 2022
  • Jiaquan Zhang
