VICTOR—A new cyberinfrastructure for volcanology
We introduce the Volcanology Infrastructure for Computational Tools and Resources (VICTOR), a cloud-based cyberinfrastructure designed to modernize computational workflows and data access in volcanology. Built around a scalable JupyterHub environment, VICTOR provides users with an array of pre-installed modeling tools, remote sensing data access workflows, geochemical calculators, and the pyVICTOR utility library for geospatial and visualization tasks. The platform drives educational efforts through courses, modular teaching materials, and multilingual documentation. VICTOR promotes open science by making tools findable, accessible, interoperable, and reusable (FAIR) and enables innovative workflows including multi-model intercomparisons and inversion schemes. We describe its architecture, current tool suite, community engagement activities, and plans for model coupling, machine learning integration, and expanded observatory support. VICTOR exemplifies a community-driven approach to infrastructure that empowers researchers, educators, and stakeholders in volcanic hazard science.
- Research Article
- 10.36615/wfx19633
- May 6, 2025
- Communicare: Journal for Communication Studies in Africa
The arms race in generative artificial intelligence has transformed digital markets, with artificial-intelligence-powered platforms projected to drive market growth to nearly $740 billion by 2030. However, scholarly understanding of how these technologies affect platform competition remains limited. This article explores how generative pre-trained transformers influence digital superintermediaries’ market power and examines whether generative artificial intelligence capabilities reinforce or challenge existing platform dominance. Using a conceptual literature analysis of platforms and artificial intelligence development patterns, the research uncovers a critical paradox: While generative pre-trained transformers represent revolutionary advancement, they create novel forms of artificial intelligence market concentration. The findings reveal how digital superintermediaries leverage artificial-intelligence-powered platforms through control over computational resources and data access, creating self-reinforcing cycles of artificial intelligence capability enhancement. This research demonstrates how artificial intelligence capabilities, particularly generative pre-trained transformers, create new mechanisms of market power consolidation, suggesting the need for innovative regulatory approaches that address the unique characteristics of generative artificial-intelligence-enhanced digital multisided platforms.
- Research Article
- 10.47172/2965-730x.sdgsreview.v4.n04.pe04291
- Dec 31, 2024
- Journal of Lifestyle and SDGs Review
Objective: This research examines the implementation of land registration in Indonesia to provide legal certainty over land ownership. This research aims to analyze the factors that influence the land registration process and its impact on legal protection for landowners. Theoretical Framework: This research is grounded in the principles of agrarian law, including Law No. 5/1960 on Basic Agrarian Principles and other relevant regulations. The main theories used include legal certainty, administrative efficiency, and protection of land rights, which form the basis for understanding the challenges and implications of land registration. Method: This research took a qualitative approach with a case study method. Data were collected through in-depth interviews with National Land Agency officers, landowners, and agrarian law experts, as well as document analysis. Data were analyzed using thematic methods to identify relevant patterns and relationships. Results and Discussion: There exist significant obstacles in the land registration process, such as low public awareness, complex bureaucracy, and frequent land disputes arising from overlapping claims. This paper contextualizes these challenges within a legal framework, emphasizing the importance of public education, simplifying procedures, and strengthening enforcement mechanisms. Limitations such as data access and procedural inefficiencies are also discussed. Research Implications: The research offers both practical and theoretical insights, while highlighting the need for better land administration systems, more active community engagement, and a strong regulatory framework. These recommendations are intended to improve legal certainty and reduce land-related conflicts, benefiting sectors such as real estate and rural development. Originality/Value: This research contributes to the literature by providing a deeper comprehension of the complexities of land registration in Indonesia.
Its value resides in the practical solutions offered for policymakers and stakeholders to improve the effectiveness and fairness of the land registration system, as well as in promoting social and economic stability.
- Research Article
- 10.1109/access.2021.3122818
- Jan 1, 2021
- IEEE Access
Processing-in-memory (PIM) architectures show the advantage of handling applications that generate complicated memory request patterns; such memory streams usually degrade an application’s performance in conventional memory hierarchy systems. In particular, deep convolutional neural network (DCNN) processing, which consists of several functionalities, could be highly optimized if PIM cores can extend the processing capability and data accessibility. In this work, we propose a functionality-based PIM accelerator for DCNNs. We design several modules in addition to the conventional PIM system based on a hybrid memory cube (HMC). First, we compose a new buffer module, namely a shared cache, through which PIM cores are provided DCNN functionalities and pre-trained weights. The PIM cores subsequently enhance computational utilization and data accessibility. Second, an efficient replacement method complements the shared cache to optimize the data miss rate of DCNN processing. Third, we compose dual prefetchers that can deal with DCNN memory access patterns, thereby reducing the system’s overall latency. Fourth, we compose a PIM scheduler for PIM core-level autonomous request control. The PIM scheduler relieves the host processor of significant computational loads, reducing the system’s overall latency and energy consumption. In a performance evaluation based on the trace-driven HMC simulator, our proposed model improves average latency and bandwidth by 38.9% and 27.9%, respectively, with only 18.7% more energy consumption compared with conventional HMC-based PIM systems. Our system also achieves scalable processing performance: as the DCNN becomes deeper, it processes the network faster than conventional PIM systems.
- Research Article
- 10.21541/apjess.1705042
- Jan 31, 2026
- Academic Platform Journal of Engineering and Smart Systems
This paper presents a comprehensive synthesis of major breakthroughs in artificial intelligence (AI) over the past fifteen years, integrating historical, theoretical, and technological perspectives. It identifies key inflection points in AI’s evolution by tracing the convergence of computational resources, data access, and algorithmic innovation. The analysis highlights how researchers enabled GPU-based model training, triggered a data-centric shift with ImageNet, simplified architectures through the Transformer, and expanded modeling capabilities with the GPT series. Rather than treating these advances as isolated milestones, the paper frames them as indicators of deeper paradigm shifts. By applying concepts from statistical learning theory such as sample complexity and data efficiency, the paper explains how researchers translated breakthroughs into scalable solutions and why the field must now embrace data-centric approaches. In response to rising privacy concerns and tightening regulations, the paper evaluates emerging solutions like federated learning, privacy-enhancing technologies (PETs), and the data site paradigm, which reframe data access and security. In cases where real-world data remains inaccessible, the paper also assesses the utility and constraints of mock and synthetic data generation. By aligning technical insights with evolving data infrastructure, this study offers strategic guidance for future AI research and policy development.
- Book Chapter
- 10.1007/978-0-387-78448-9_18
- Jan 1, 2008
Besides computation-intensive tasks, the Grid also facilitates sharing and processing very large databases and file systems that are distributed over multiple resources and administrative domains. Although accessing data in the Grid is supported by various lower-level tools, end-users find it difficult to utilise these solutions directly. High-level environments, such as Grid portal and workflow solutions, provide little or no support for data access and manipulation. Workflow systems are widely utilised in Grid computing to automate computational tasks. Unfortunately, the ways of feeding data into these workflows are limited and in most cases require additional tools and manual intervention. This paper describes how data can be fed into computational workflows from heterogeneous data sources. The P-GRADE Grid portal and workflow engine have been integrated with the SDSC Storage Resource Broker (SRB) in order to access SRB data resources as inputs and outputs of workflow components. The solution automates data interaction in computational workflows, allowing users to seamlessly access and process data stored in SRB resources. The implemented solution also enables the seamless interoperation of SRB, SRM (Storage Resource Manager) and GridFTP file catalogues.
- Research Article
- 10.1007/s00521-018-3667-y
- Aug 12, 2018
- Neural Computing and Applications
Robust object tracking is a challenging task in multimedia understanding and computer vision. Traditional tracking algorithms only use the forward tracking information while neglecting the inverse information. An inverse relocation strategy is used to learn the translation and scale filters in the proposed tracking algorithm. First, we learn a translation filter using both the forward and the inverse tracking information based on ridge regression. The object position can be attained using the translation filter via the inverse relocation strategy. Second, the scale filter is attained using ridge regression, and a smooth strategy is adopted to integrate the forward and inverse scale factors. Experiments are performed on the scale variation dataset and the OTB-50 dataset. Extensive experimental results show that the proposed algorithm performs favorably against several state-of-the-art methods in terms of precision and success rate. Meanwhile, the proposed algorithm is also robust to deformation to a great extent.
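The ridge-regression filter learning and the forward/inverse smoothing can be illustrated in miniature. The scalar closed form and the blending weight `alpha` below are simplifications of the paper's multi-channel correlation filters, not its actual formulation:

```python
def ridge_1d(xs, ys, lam):
    """Closed-form 1-D ridge regression: w = (x . y) / (x . x + lam).

    A scalar stand-in for learning a translation/scale filter; lam is
    the regularization weight.
    """
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) + lam
    return num / den

def smooth_fuse(forward, inverse, alpha=0.7):
    # Smoothly blend forward- and inverse-tracking estimates of a factor
    # (e.g., scale); alpha is a hypothetical mixing weight.
    return alpha * forward + (1 - alpha) * inverse
```

In the full method, the same closed form is applied per feature channel and the fused scale factor drives the next frame's search region.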
- Research Article
- 10.1145/3570927
- Jun 20, 2023
- ACM Transactions on Reconfigurable Technology and Systems
Super-resolution (SR) based on deep learning has obtained superior performance in image reconstruction. Recently, various algorithmic efforts have been committed to improving image reconstruction quality and speed. However, SR inference involves huge amounts of computation and data access, leading to low hardware implementation efficiency. For instance, up-sampling with the deconvolution process requires considerable computation resources. In addition, the output feature maps of several middle layers are extraordinarily large, which is challenging to optimize and causes serious data access issues. In this work, we present an all-on-chip hardware architecture based on a deconvolution scheme and a feature map segmentation strategy, namely ADAS, where all the data generated by the middle layers are buffered on-chip to avoid large data movements between on- and off-chip memory. In ADAS, we develop a hardware-friendly and efficient deconvolution scheme to accelerate the computation. Also, a dynamically reconfigurable process element (PE) combined with efficient mapping is proposed to raise PE utilization to nearly 100% and support multiple scaling factors. Based on our experimental results, ADAS demonstrates real-time image SR and better image reconstruction quality in PSNR (37.15 dB) and SSIM (0.9587). Validated on an FPGA platform and compared to the baseline, ADAS supports scaling factors of 2, 3, and 4, achieving 2.68×, 5.02×, and 8.28× speedups, respectively.
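The deconvolution scheme can be understood through the standard recasting of transposed convolution as zero-insertion upsampling followed by an ordinary convolution. A minimal 1-D sketch follows; ADAS's hardware-friendly scheme avoids the wasted multiplications by zero that this naive version performs:

```python
def zero_insert(row, stride):
    # Insert (stride - 1) zeros between neighbouring samples: the first
    # step of recasting deconvolution as an ordinary convolution.
    out = []
    for i, v in enumerate(row):
        out.append(v)
        if i < len(row) - 1:
            out.extend([0] * (stride - 1))
    return out

def conv1d(row, kernel):
    # Plain 'valid' 1-D convolution over the upsampled signal.
    k = len(kernel)
    return [sum(row[i + j] * kernel[j] for j in range(k))
            for i in range(len(row) - k + 1)]
```

A hardware scheme can instead split the kernel into stride-specific sub-kernels so that no zero ever enters a multiplier, which is the kind of restructuring the paper targets.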
- Conference Article
- 10.1109/smartworld-uic-atc-scalcom-iop-sci.2019.00282
- Aug 1, 2019
Genome variant analysis is performed on Variant Call Format (VCF) files. It can take days to process these files for genome analytics due to challenges such as loading the files for each user query and processing them to answer questions of interest. As data sizes grow, timely processing of this data puts enormous pressure on computational resources, leading to significant processing delays and jeopardising the ultimate goal of bringing genomic discoveries to the masses. We believe this problem will not be solved until the underlying data structure used to organise and process these files undergoes a transformation. To overcome this problem, we have proposed a graph-based system to represent the data in VCF files. This allows the data to be loaded once into a graph model that is subsequently queried and processed numerous times without any additional computational and data access penalties. This reduces data access time by giving constant-time access to any node and addresses the performance and scalability challenges that have been a limiting factor for the mass-scale adoption of genome analytics. It takes only 2 ms to access any data node in our graph model, and this remains constant for any number of nodes.
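A load-once, query-many graph over VCF records can be sketched with dictionary-backed adjacency, where average-case constant-time node lookup mirrors the constant-access claim above. The node schema, identifiers, and edge semantics below are illustrative, not the paper's actual data model:

```python
class VariantGraph:
    """Toy graph model for VCF records: load once, query many times."""

    def __init__(self):
        self.nodes = {}  # node_id -> variant record; dict lookup is O(1) on average
        self.edges = {}  # node_id -> set of related node ids

    def add_variant(self, node_id, chrom, pos, ref, alt):
        # Store the core VCF fields (CHROM, POS, REF, ALT) on the node.
        self.nodes[node_id] = {"chrom": chrom, "pos": pos, "ref": ref, "alt": alt}
        self.edges.setdefault(node_id, set())

    def link(self, a, b):
        # Undirected edge between related variants (e.g., same locus).
        self.edges[a].add(b)
        self.edges[b].add(a)

    def get(self, node_id):
        return self.nodes[node_id]  # constant-time regardless of graph size
```

Because the graph is built once at load time, repeated analytic queries pay only the lookup cost, not the VCF parsing cost.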
- Conference Article
- 10.1109/iros40897.2019.8968562
- Nov 1, 2019
Deep networks have brought significant advances in robot perception, improving the capabilities of robots in several visual tasks, ranging from object detection and recognition to pose estimation, semantic scene segmentation and many others. Still, most approaches typically address visual tasks in isolation, resulting in overspecialized models which achieve strong performance in specific applications but work poorly in other (often related) tasks. This is clearly sub-optimal for a robot which is often required to perform multiple visual recognition tasks simultaneously in order to properly act and interact with the environment. This problem is exacerbated by the limited computational and memory resources typically available onboard a robotic platform. The problem of learning flexible models which can handle multiple tasks in a lightweight manner has recently gained attention in the computer vision community, and benchmarks supporting this research have been proposed. In this work we study this problem in the robot vision context, proposing a new benchmark, the RGB-D Triathlon, and evaluating state-of-the-art algorithms in this novel challenging scenario. We also define a new evaluation protocol, better suited to the robot vision setting. Results shed light on the strengths and weaknesses of existing approaches and on open issues, suggesting directions for future research.
- Research Article
- 10.1109/jiot.2022.3179000
- Nov 1, 2022
- IEEE Internet of Things Journal
Edge computing is an indispensable technology that overcomes the delay limitations of cloud computing. In edge computing, computational resources are deployed at the network edge, and the computational tasks and data of end terminals can be efficiently processed by edge nodes. Considering the computational resource limitations of edge nodes, collaborative edge computing integrates the computational resources of edge nodes and provides more efficient computing services for end terminals. This article considers a computation offloading problem in collaborative edge computing networks, where computation offloading and resource allocation are optimized by means of a collaborative offloading approach: a terminal can offload a computing task to an edge node, which either processes the task with its own computing resources or further offloads the task to other edge nodes. Long-term objectives and long-term constraints are considered, and Lyapunov optimization is applied to convert the original nonconvex computation offloading problem into a second problem that approximates the original; this problem remains nonconvex but has a special structure, which gives rise to a new distributed algorithm that optimally solves it. Finally, the performance and provable bound of the distributed algorithm are theoretically analyzed. Numerical results demonstrate that the distributed algorithm can achieve guaranteed long-term performance, and also demonstrate the performance improvement over computation offloading without collaboration among edge nodes.
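The Lyapunov machinery referenced above follows the generic drift-plus-penalty recipe: maintain a virtual queue for each long-term constraint and, per time slot, pick the decision minimizing a weighted sum of cost and queue-scaled resource use. A minimal sketch, with all parameter names assumed rather than taken from the paper:

```python
def drift_plus_penalty_choice(q, options, v):
    """Pick the offloading option minimizing v*cost + q*resource_use.

    options: list of (cost, resource_use) pairs, one per candidate decision
             (process locally, offload to edge node A, B, ...).
    q:       virtual queue tracking accumulated long-term constraint violation.
    v:       Lyapunov trade-off weight between cost and constraint drift.
    """
    return min(options, key=lambda o: v * o[0] + q * o[1])

def update_queue(q, used, budget):
    # Virtual queue dynamics: grow when per-slot resource use exceeds the
    # long-term per-slot budget, never drop below zero.
    return max(q + used - budget, 0.0)
```

When the queue is small, the cheapest option wins; as the queue grows after budget overruns, resource-frugal options are favored, which is how long-term constraints are enforced slot by slot.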
- Book Chapter
- 10.70593/978-81-981271-4-3_10
- Oct 13, 2024
The machine learning (ML) and deep learning (DL) field is quickly progressing due to improvements in computational power, data access, and algorithmic advancements. Recent developments indicate a significant change toward models that are more effective, adaptable, and easy to understand. Federated learning and edge computing are becoming more popular, allowing for decentralized data processing and improved privacy. Transformer architectures, originally made popular in natural language processing (NLP), are now being utilized in various applications, showing better effectiveness in image and time-series analysis. Moreover, the combination of quantum computing and ML offers the potential for exponential enhancements, which could lead to the resolution of problems that were previously unsolvable. Explainable AI (XAI) is becoming increasingly important, as it tackles the opaque characteristics of DL models, fostering confidence, and guaranteeing adherence to ethical guidelines. Moreover, the integration of ML with new technologies like Internet of Things (IoT), blockchain, and 5G is opening doors for creative uses in smart cities, healthcare, and autonomous systems. Researchers are investigating the use of hybrid models that combine symbolic AI with neural networks to improve reasoning abilities. The advancements in ML and DL architectures have the potential to tackle complex global issues and foster technological innovation at an unprecedented level, signaling a major step towards smarter and independent systems.
- Book Chapter
- 10.1007/978-1-4614-0508-5_4
- Dec 5, 2011
The chapter describes the experience and lessons learnt during customization of a seismic early warning system for grid technology. Our goal is to shorten the workflow of an experiment, so that final users have direct access to data sources, i.e. seismic sensors, without intermediaries and without leaving the environment employed for the analysis. We strongly rely on the remote instrumentation capabilities of the grid, a feature that makes this platform very attractive for scientific communities aiming at blending computational procedures and data access in a single tool. The expected outcome is a distributed virtual laboratory working in a secure way regardless of the distance between or the number of participants. We started to set up the application and the infrastructure as part of the DORII (Deployment of Remote Instrumentation Infrastructure) project. In the following sections we explain the steps that led us to integration, the testers' experience, the results obtained so far, and future perspectives.
- Conference Article
- 10.1109/icpads.2016.0138
- Dec 1, 2016
Like time complexity models that have significantly contributed to the analysis and development of fast algorithms, energy complexity models for parallel algorithms are desired as crucial means to develop energy-efficient algorithms for ubiquitous multicore platforms. Ideal energy complexity models should be validated on real multicore platforms and applicable to a wide range of parallel algorithms. However, existing energy complexity models for parallel algorithms are either theoretical, without model validation, or algorithm-specific, without the ability to analyze energy complexity for a wide range of parallel algorithms. This paper presents a new general validated energy complexity model for parallel (multithreaded) algorithms. The new model abstracts away possible multicore platforms by the static and dynamic energy of their computational operations and data access, and derives the energy complexity of a given algorithm from its work, span and I/O complexity. The new model is validated by different sparse matrix vector multiplication (SpMV) algorithms and dense matrix multiplication (matmul) algorithms running on high performance computing (HPC) platforms (e.g., Intel Xeon and Xeon Phi). The new energy complexity model is able to characterize and compare the energy consumption of SpMV and matmul kernels along three dimensions: different algorithms, different input matrix types and different platforms. The prediction of the new model regarding which algorithm consumes more energy with different inputs on different platforms is confirmed by the experimental results. In order to improve the usability and accuracy of the new model for a wide range of platforms, the platform parameters of the ICE model are provided for eleven platforms including HPC, accelerator and embedded platforms.
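The model's structure, deriving energy from work, span, and I/O complexity plus platform constants, can be sketched as follows. The parameter names and exact composition are illustrative, since the actual ICE model calibrates platform-specific static and dynamic coefficients:

```python
def energy_estimate(work, span, io, p, eps_op, eps_io, power_static):
    """Toy work-span-I/O energy estimate for a multithreaded algorithm.

    work:         total operation count W
    span:         critical-path length S
    io:           number of data accesses Q
    p:            number of cores
    eps_op/eps_io: dynamic energy per operation / per data access
    power_static: platform static power draw

    E = dynamic compute + dynamic data access + static power over runtime,
    with runtime taken from the classic greedy-scheduler bound W/p + S.
    """
    runtime = work / p + span
    dynamic = eps_op * work + eps_io * io
    static = power_static * runtime
    return dynamic + static
```

Comparing two algorithms then reduces to plugging in their W, S, and Q with one platform's calibrated constants, which is how the model ranks SpMV and matmul variants across platforms.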
- Research Article
- 10.1177/17470161251361575
- Aug 1, 2025
- Research ethics
While genomic data sharing enhances transparency and research efficiency, it also raises significant ethical and social challenges. This study explored stakeholders' perspectives on these issues, particularly around privacy, confidentiality, and equity in collaborative research. A phenomenological qualitative study was conducted between August and December 2023 at Makerere University College of Health Sciences, other research-intensive institutions, and national regulatory bodies. The study engaged 86 participants: 47 key informants (16 researchers, 14 ethics committee members, nine community advisory board members, and eight research regulators) and four deliberative focus group discussions with 39 participants. Interviews were transcribed verbatim, and thematic analysis was conducted using NVivo 14. Three major themes emerged: (1) stakeholders' experiences in genomic research, including their roles as participants, implementers, or overseers; (2) ethical concerns, such as informed consent, third-party data access, inequities between high-income and low- and middle-income country (LMIC) researchers and participants, and the lack of benefit-sharing frameworks; and (3) social implications, including stigma, discrimination, labeling, community perceptions of fairness, and the need for meaningful engagement. Participants emphasized the importance of protecting participant rights, promoting equity, and ensuring robust data governance and security. The theoretical frameworks of principlism and distributive justice provided a valuable lens for examining these concerns, particularly by highlighting the need to safeguard privacy and fairly distribute responsibilities and benefits in global collaborations. Participants also noted that perceptions of fairness are shaped by trust, local context, and past experiences with research, factors that are critical for building equitable and respectful partnerships.
This study underscores the urgent need to strengthen protections for research participants and promote fairness in genomic data sharing. Policies should emphasize culturally contextualized consent, active community engagement, restricted third-party data access, and strong data protection mechanisms to address existing inequities and prevent misuse.
- Book Chapter
- 10.1007/978-981-19-6901-0_121
- Jan 1, 2022
With the development of 5th-generation-plus and 6th-generation (5G+ and 6G) wireless communication networks, cloud computing and wireless access networks are combining tightly to meet diversified industry demands across deployment and service scenarios. As the computing demand of cloud networks increases, an Acceleration Abstraction Layer (AAL) is introduced into the network architecture to offload computing tasks from the CPU to GPUs, FPGAs, and other hardware accelerators; processing tasks in parallel on these accelerators frees CPU resources for other work. However, the scheduling of network resources and computing resources has not previously been considered jointly. We treat the network slice, computing, and storage resources of the cloud-network system together and design three algorithms to optimize resource scheduling; the key of each algorithm is to weight the hardware computing and storage resources together with the radio slice resources and schedule them jointly.
Keywords: Cloud; Container; Network slice; Hardware accelerator; AAL