Thoughts on sub-Turing interactive computability
The article outlines a possible new direction for Computability Logic, focused on computability without infinite memory or other physically unattainable computational resources. The new approach would treat such resources as external rather than internal to computing devices, accounting for them explicitly in the antecedents of logical formulas expressing computational problems.
- Research Article
31
- 10.1186/1758-2946-3-16
- May 16, 2011
- Journal of Cheminformatics
Background: The diversity and the largely independent nature of chemical research efforts over the past half century are, most likely, the major contributors to the current poor state of chemical computational resource and database interoperability. While open software for chemical format interconversion and database entry cross-linking has partially addressed database interoperability, computational resource integration is hindered by the great diversity of software interfaces, languages, access methods, and platforms, among others. This has, in turn, translated into limited reproducibility of computational experiments and the need for application-specific computational workflow construction and semi-automated enactment by human experts, especially where emerging interdisciplinary fields, such as systems chemistry, are pursued. Fortunately, the advent of the Semantic Web, and the very recent introduction of RESTful Semantic Web Services (SWS), may present an opportunity to integrate all of the existing computational and database resources in chemistry into a machine-understandable, unified system that draws on the entirety of the Semantic Web. Results: We have created a prototype framework of Semantic Automated Discovery and Integration (SADI) SWS that exposes the QSAR descriptor functionality of the Chemistry Development Kit. Since each of these services has formal ontology-defined input and output classes, and each service consumes and produces RDF graphs, clients can automatically reason about the services and the available reference information necessary to complete a given overall computational task specified through a simple SPARQL query. We demonstrate this capability by carrying out QSAR analysis backed by a simple formal ontology to determine whether a given molecule is drug-like. Further, we discuss parameter-based control over the execution of SADI SWS.
Finally, we demonstrate the value of computational resource envelopment as SADI services through service reuse and ease of integration of computational functionality into formal ontologies. Conclusions: The work we present here may trigger a major paradigm shift in the distribution of computational resources in chemistry. We conclude that envelopment of chemical computational resources as SADI SWS facilitates interdisciplinary research by enabling the definition of computational problems in terms of ontologies and formal logical statements instead of cumbersome and application-specific tasks and workflows.
- Research Article
74
- 10.1177/109434200001400308
- Aug 1, 2000
- The International Journal of High Performance Computing Applications
Striking progress of network technology is enabling high performance global computing, in which computational and data resources in a wide-area network (WAN) are transparently employed to solve large-scale problems. Several high performance global computing systems, such as Ninf, NetSolve, RCS, Legion, and Globus, have already been proposed. Each of these systems proposes to effectively achieve high performance with some efficient scheduling scheme, whereby a scheduler selects a set of appropriate computing resources that solve the client’s computational problem. This paper proposes a performance evaluation model for effective scheduling in global computing systems. The proposed model represents a global computing system by a queuing network, in which servers and networks are represented by queuing systems. Verification of the proposed model and evaluation of scheduling schemes on the model showed that the model could simulate behavior of an actual global computing system and scheduling on the system effectively.
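The scheduling idea in this abstract can be illustrated with a toy queuing estimate. The sketch below is not the paper's model: the server names, arrival/service rates, and network delays are invented, and each server is approximated as a single M/M/1 queue; the scheduler simply picks the server with the lowest predicted response time (network delay plus expected sojourn time).

```python
# Toy illustration of queuing-model-based server selection, in the spirit of
# the scheduling model described above. All servers, rates, and delays are
# hypothetical; a real global computing system would measure these online.

def mm1_response_time(arrival_rate, service_rate):
    """Expected sojourn time in an M/M/1 queue: 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        return float("inf")  # unstable queue, never pick it
    return 1.0 / (service_rate - arrival_rate)

def pick_server(servers, network_delay):
    """Choose the server minimizing network delay + expected queueing time."""
    best, best_t = None, float("inf")
    for name, (lam, mu) in servers.items():
        t = network_delay[name] + mm1_response_time(lam, mu)
        if t < best_t:
            best, best_t = name, t
    return best, best_t

servers = {"A": (4.0, 5.0), "B": (1.0, 3.0)}  # (arrival rate, service rate)
delays = {"A": 0.05, "B": 0.30}               # WAN delay to each server
print(pick_server(servers, delays))           # B wins despite its slower link
```

Here the heavily loaded server A loses even though its network delay is six times smaller, which is exactly the kind of trade-off a queuing-network scheduler captures.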
- Research Article
27
- 10.1109/access.2019.2922702
- Jan 1, 2019
- IEEE Access
The cloud radio access network (C-RAN) with a mobile edge computing (MEC) structure, which consists of a baseband unit (BBU) pool integrated with an MEC server and several remote radio heads (RRHs) serving the mobile terminals, can help users with computation-intensive tasks and bring extra profit to the network operator at the same time. This paper presents a novel task-aware C-RAN with MEC structure and formulates a profit maximization problem by jointly optimizing the offloading strategy and the radio and computational resource allocation under constraints on offloading latency and fronthaul capacity, along with limited bandwidth and computational resources. To solve this NP-hard optimization problem in a distributed and efficient way, we propose a spectrum efficiency (SE)-based joint optimization for offloading and resource allocation (SJOORA) scheme, which decomposes the original problem into two sub-problems. An SE-based offloading strategy is derived for a fixed resource allocation, while the bandwidth and computational resource allocation problem is solved by a Lagrangian multiplier method for a predetermined offloading strategy. Finally, by solving these two sub-problems iteratively, a suboptimal solution is obtained for the original problem. The simulation results show that the proposed SJOORA scheme can effectively increase the profit of the network operator with relatively low complexity.
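The Lagrangian-multiplier step mentioned above can be sketched in miniature. This is a generic water-filling allocation, not the SJOORA scheme itself: the per-user utility log(1 + c_i x_i), the coefficients, and the budget C are illustrative assumptions, and the multiplier is found by bisection.

```python
# Hedged sketch of a Lagrangian-multiplier resource allocation: split a shared
# compute budget C across users to maximize sum_i log(1 + c_i * x_i).
# KKT conditions give the water-filling form x_i = max(0, 1/nu - 1/c_i);
# we bisect on the multiplier nu until sum(x) meets the budget.

def allocate(c, C, iters=100):
    lo, hi = 1e-9, max(c)  # bracket the multiplier (sum(x) decreases in nu)
    x = []
    for _ in range(iters):
        nu = 0.5 * (lo + hi)
        x = [max(0.0, 1.0 / nu - 1.0 / ci) for ci in c]
        if sum(x) > C:
            lo = nu   # allocation exceeds budget -> raise the "price"
        else:
            hi = nu
    return x

x = allocate([2.0, 1.0, 0.5], C=3.0)  # users with better channels get more
print([round(v, 3) for v in x], round(sum(x), 3))
```

Users with larger coefficients receive larger shares, and the shares sum to the budget, mirroring how the dual method in the paper prices the scarce bandwidth and compute.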
- Research Article
18
- 10.1109/access.2020.3029253
- Jan 1, 2020
- IEEE Access
The ultra-reliable and low latency communication (URLLC) and massive machine type communication (mMTC) services in 5G are envisioned to support intelligent automation in heterogeneous Factory of the Future (FoF) networks, and mobile-edge computing (MEC) is considered a promising system for enabling real-time task processing at the edge of the network. In the future factory, production machines and environmental monitoring devices will be endowed with wireless connectivity for mobility. These devices are deployed to run complicated real-time tasks. To ensure such mission-critical tasks are processed in time, parts of the tasks should be completed with the assistance of the edge server or even the cloud. In this work, we jointly investigate the partial task offloading, computation, and communication (licensed and unlicensed) resource allocation problem, trading off overall power consumption against quality of service (QoS) satisfaction. A 2-tier MEC-cloud framework is provided, wherein the IoT mobile devices (MDs) are able to partition tasks into segments and offload them to the MEC and the cloud server. Considering the limits of communication and computation resources, we propose a mechanism called the 5G and NR-U opportunity-cost-based offloading algorithm (5G/NR-U OCBOA) to optimize resource allocation. The mechanism comprises two algorithms: 5G OCBOA handles the licensed-only case, and NR-U OCBOA is dedicated to the unlicensed one. We perform the two algorithms iteratively to obtain the final solution. The simulation results show that our low-complexity algorithms outperform the benchmark greedy algorithms in almost all cases, achieving up to 59.3% lower MD blocking probability, up to 58.7% power-saving gain, and up to 47.6% more QoS gain.
- Research Article
21
- 10.1016/j.sysarc.2021.102331
- Dec 1, 2021
- Journal of Systems Architecture
Optimal computational resource pricing in vehicular edge computing: A Stackelberg game approach
- Conference Article
- 10.3997/2214-4609.20149522
- May 23, 2011
An important part of research activity in geophysics is the solution of very large computational problems. For economic reasons, research labs usually do not own the computational resources necessary to perform those tasks. Moreover, even the most powerful companies sometimes cannot allocate enough computational resources to solve research problems in reasonable time. However, multiple cooperating labs may pool their resources to solve bigger problems. Grid computing is a technology primarily devoted to the solution of computation-intensive problems by Virtual Organizations of independent trusted partners. The European Grid Initiative merges huge computational and storage resources for grid users. To benefit from access to those resources, one has to 'gridify' application technologies. This means a paradigm shift from High Performance Computing to High Throughput Computing, and from parallel computing to distributed computing. A 2.5D full-wave modelling approach suited to grid technology is discussed. The approach has been tested in the Ukrainian Academic Grid.
- Research Article
8
- 10.1109/access.2022.3152531
- Jan 1, 2022
- IEEE Access
Since mobile devices typically have limited computation resources, offloading computation tasks to fog access points (F-APs) is a promising approach to support delay-sensitive and computation-intensive applications. This paper considers joint computation and communication resource allocation for multiuser multi-server systems, which aims to maximize the number of users being served and minimize the total energy consumption subject to delay tolerance constraints. The joint computation and communication resource allocation problem is solved optimally for both non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA) schemes. The joint user pairing and fog access point assignment problem for NOMA is proved to be NP-hard. For both NOMA and OMA, heuristic and optimal algorithms based on graph matching are designed. The optimal algorithms, though of high complexity, allow NOMA and OMA to be compared at their full potential and serve as benchmarks for evaluating the heuristic algorithms. Simulation results show that NOMA significantly outperforms OMA in terms of outage probability and energy consumption, especially for tight delay tolerance constraints and large computational tasks. Simulation results also demonstrate that our proposed NOMA and OMA schemes significantly outperform the swap-enabled matching algorithm widely used in the literature.
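The graph-matching assignment described above can be illustrated by a brute-force miniature: match three users to three F-APs to minimize total energy subject to a per-pair delay bound. The energy and delay numbers are invented, and the exhaustive enumeration is only for illustration; the paper's optimal algorithms rely on matching theory rather than enumeration.

```python
# Illustrative brute-force version of the user-to-F-AP assignment: minimize
# total energy subject to a per-pair delay tolerance. All numbers are made up.
from itertools import permutations

energy = [[3.0, 1.0, 4.0],   # energy[u][f]: cost of serving user u at F-AP f
          [2.0, 5.0, 1.5],
          [4.0, 2.5, 2.0]]
delay  = [[1.0, 2.0, 1.0],   # delay[u][f]: completion delay of that pairing
          [1.5, 1.0, 3.5],
          [1.0, 1.0, 1.0]]
D_MAX = 3.0                  # delay tolerance constraint

def best_assignment():
    best, best_cost = None, float("inf")
    for perm in permutations(range(3)):          # user u served by F-AP perm[u]
        if any(delay[u][f] > D_MAX for u, f in enumerate(perm)):
            continue                             # violates the delay bound
        cost = sum(energy[u][f] for u, f in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

print(best_assignment())  # -> ((1, 0, 2), 5.0)
```

On a bipartite graph with edge weights like these, the same optimum can be found in polynomial time with a minimum-weight matching algorithm, which is what makes the graph-matching formulation attractive at scale.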
- Research Article
- 10.14489/vkit.2024.10.pp.034-041
- Oct 1, 2024
- Vestnik komp'iuternykh i informatsionnykh tekhnologii
Currently, the relevance of implementing distributed computing in geographically distributed heterogeneous dynamic computing environments has increased, driven both by the need to localize computing outside cloud structures and by the development of computing and network technologies. The limited computing resources of devices and their autonomy raise the issue of optimizing computing processes implemented in a distributed manner. Studying how to organize computing more efficiently by distributing computing resources, we concluded that the currently used environment models and formulations of resource-optimization problems neglect the resource costs that arise during data transit over the network and during data transfer between computing tasks, as well as the overhead incurred in solving the resource distribution problem itself. This article proposes a general formulation of the multicriteria optimization problem in which the controlled parameters include the computing resources expended on data transfer through transit devices and the computational complexity of solving the resource distribution problem. The developed method for organizing efficient computations in distributed heterogeneous dynamic environments implements a greedy strategy for selecting metaheuristic optimization algorithms that achieve a given accuracy at minimal resource cost, with the possibility of improving the obtained result within the constraints of the computational resource allocation problem. The novelty of the research results lies in the new formulation of the resource allocation problem and the method for solving it.
The results of the experimental study confirm the effectiveness of the developed method, which halves the computational complexity of solving the problem while meeting the specified accuracy requirements.
- Research Article
1
- 10.3390/fi16090312
- Aug 28, 2024
- Future Internet
Massive computational resources are required by a booming number of artificial intelligence (AI) services in the communication network of the smart grid. To alleviate the computational pressure on data centers, an edge-computing-first network (ECFN) can serve as an effective solution, realizing distributed model training based on data parallelism for AI services in the smart grid. Because AI services are of diversified types, an edge data center's workload varies across time periods. Selfish edge data centers from different edge suppliers are reluctant to share their computing resources without a rule for fair competition. AI-service-oriented dynamic computational resource scheduling of edge data centers affects both the economic profit of AI service providers and computational resource utilization. This letter mainly discusses the partition and distribution of AI data based on distributed model training, and the dynamic computational resource scheduling problem among multiple edge data centers for AI services. To this end, a mixed integer linear programming (MILP) model and a Deep Reinforcement Learning (DRL)-based algorithm are proposed. Simulation results show that the proposed DRL-based algorithm outperforms the benchmark in terms of the AI service provider's profit, the backlog of distributed model-training tasks, running time, and multi-objective optimization.
- Research Article
20
- 10.1145/1131313.1131319
- Apr 1, 2006
- ACM Transactions on Computational Logic
Computability logic is a formal theory of computational tasks and resources. Its formulas represent interactive computational problems, logical operators stand for operations on computational problems, and validity of a formula is understood as its being a scheme of problems that always have algorithmic solutions. The earlier article "Propositional computability logic I" proved soundness and completeness for the (in a sense) minimal nontrivial fragment CL1 of computability logic. The present article extends that result to the significantly more expressive propositional system CL2. What makes CL2 more expressive than CL1 is the presence of two sorts of atoms in its language: elementary atoms, representing elementary computational problems (i.e. predicates), and general atoms, representing arbitrary computational problems. CL2 conservatively extends CL1, with the latter being nothing but the general-atom-free fragment of the former.
- Conference Article
2
- 10.1109/ijcnn.2001.938767
- Jul 15, 2001
Attempts to use neural networks to model recursive symbolic processes like logic have met with some success, but have faced serious hurdles caused by the limitations of standard connectionist coding schemes. As a contribution to this effort, this paper presents recent work in infinite recursive auto-associative memory, a new connectionist unification model based on a fusion of recurrent neural networks with fractal geometry. Using a logical programming language as our modeling domain, we show how this approach solves many of the problems faced by earlier connectionist models, supporting arbitrarily large sets of logical expressions.
- Conference Article
- 10.1109/isit.2007.4557274
- Jun 1, 2007
We consider information in the presence of computation limitations. Specifically, we consider the following question: what is the output entropy of a computation if no (or limited) computational resources are available? We first explain two notions, effective input information and the average number of decisions in a computation problem, and show how these notions can help us understand the output entropy of a computation when computational resources are limited. We then propose a procedure to evaluate output entropy in such scenarios and show that it satisfies some nice properties.
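The notion of output entropy can be made concrete with a small sketch: evaluate a function on uniformly random inputs, tally the output distribution, and compute its Shannon entropy. The functions and the 3-bit input domain below are illustrative choices, not the paper's construction.

```python
# Sketch of "output entropy of a computation": the Shannon entropy of a
# function's output distribution under uniform random inputs. The example
# functions and the 3-bit domain are illustrative assumptions only.
from collections import Counter
from math import log2

def output_entropy(f, domain):
    """Shannon entropy (in bits) of f's outputs over a uniform input domain."""
    counts = Counter(f(x) for x in domain)
    n = len(domain)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A 3-bit input squashed to its parity carries only 1 bit of output entropy.
print(output_entropy(lambda x: x % 2, range(8)))  # -> 1.0
# The identity computation preserves all 3 bits of input entropy.
print(output_entropy(lambda x: x, range(8)))      # -> 3.0
```

The contrast between the two calls shows the basic phenomenon the abstract studies: a computation can only reduce (never increase) the entropy available at its input.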
- Research Article
1
- 10.12694/scpe.v6i3.330
- Jan 1, 2005
- Scalable Computing Practice and Experience
Challenges Concerning Symbolic Computations on Grids
- Research Article
25
- 10.1109/jiot.2022.3153996
- Apr 1, 2023
- IEEE Internet of Things Journal
In conventional federated learning (FL), multiple edge devices holding local data jointly train a machine learning model by communicating learning updates with a centralized aggregator without exchanging their data samples. Owing to the communication and computation bottleneck at the centralized aggregator and the inaccurate learning model caused by non-IID data, we here consider a two-tier FL network, in which IoT nodes are the core clients that hold data, the model aggregators at the middle tier are low-altitude aerial platforms (UAVs), and the model aggregator at the top-most layer is a high-altitude aerial platform (a UAV at relatively high altitude). Under the assumption that each IoT node has parallel computing ability, we study energy-efficient computation and communication resource allocation in such a network within some time budget. After formulating the problem as an optimization problem, we solve the computation and communication resource allocation problems as separate subproblems within a time frame, and then propose an iterative algorithm to solve the entire problem jointly. More specifically, we solve both the energy-efficient computation and communication resource allocation subproblems using the dual decomposition technique, and then apply a bisection search-based recursive technique to solve the entire energy efficiency problem jointly. Moreover, we propose offline and online client scheduling schemes that not only select the optimal edge nodes for association but also assign workload to each client based on data quality and workload constraints. With real data, extensive simulations are conducted to verify the effectiveness of the proposed resource allocation scheme. The results further reveal that the learning performance depends not only on the computation and communication energy consumption of the FL process but also on the model divergence weight owing to the non-IID data at client IoT nodes.
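The bisection-search step mentioned above can be sketched with a Dinkelbach-style test for a scalar energy-efficiency ratio: bisect on the ratio q and check whether any candidate allocation achieves utility(x) − q·energy(x) ≥ 0. The utility and energy functions below are toy stand-ins, not the paper's formulation.

```python
# Hedged sketch of bisection search on an energy-efficiency ratio. We look for
# the largest q = utility/energy that some candidate power level x can achieve,
# via the Dinkelbach-style test max_x [utility(x) - q * energy(x)] >= 0.
# The utility and energy models below are illustrative assumptions.
from math import log2

XS = [i / 100.0 for i in range(1, 201)]  # candidate power levels 0.01..2.00

def utility(x):
    return log2(1.0 + 4.0 * x)           # e.g. an achievable-rate proxy

def energy(x):
    return 0.5 + x                       # circuit power + transmit power

def best_ratio(iters=60):
    lo, hi = 0.0, 10.0                   # bracket for the efficiency ratio
    for _ in range(iters):
        q = 0.5 * (lo + hi)
        if max(utility(x) - q * energy(x) for x in XS) >= 0:
            lo = q                       # ratio q is achievable: try higher
        else:
            hi = q                       # not achievable: lower the target
    return lo

print(round(best_ratio(), 4))            # best "bits per joule" over XS
```

Each bisection step needs only an inner maximization of a linearized objective, which is why pairing it with dual decomposition of the inner allocation problems, as the paper does, keeps the overall procedure tractable.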
- Dissertation
- 10.6092/polito/porto/2643159
- Jan 1, 2016
Silicon Nanowires (SiNWs) are considered fundamental building blocks of future nanoelectronics. Many interesting properties have earned them this prominent position in research over recent decades. A large surface-to-volume ratio, bio-compatibility, and band-gap tuning are among the most appealing features of SiNWs. More importantly, in the ongoing process of dimension miniaturization, SiNWs' compatibility with existing, reliable silicon technology stands as a fundamental advantage. Consequently, the employment of SiNWs has spread to several application fields: from computational logic, where SiNWs are used to realize transistors, to bio-chemical sensing and nanophotonic applications. In this thesis work we concentrate our attention on the employment of SiNWs in computational logic and bio-chemical sensing. In particular, we aim to contribute to the modelling and simulation of SiNW-based electron devices. Given the current intense investigation of new devices, modelling of their electrical behaviour is strongly required. On one side, modelling procedures could give insight into the physical phenomena of transport in nanometer-scale systems where quantum effects are dominant. On the other side, the availability of compact models for actual devices can be of undeniable help in the future design process. This work is divided into two parts. After a brief introduction on Silicon Nanowires, the main fabrication techniques and their properties, the first part is dedicated to the modelling of Multiple-Independent Gate Transistors, a new generation of devices arising from the composition of Gate-All-Around Transistors, finFETs and Double-Gate Transistors. Interesting applications resulting from their employment are Vertically-stacked Silicon Nanowire FETs, known to have an ambipolar behaviour, and Silicon Nanowire Arrays.
We will present a compact numerical model for composite Multiple-Independent Gate Transistors which allows computing currents and voltages in complex structures. Validation of the model through simulation proves the accuracy and the computational efficiency of the resulting model. The second part of the thesis work is instead devoted to Silicon Nanowires for bio-chemical sensing. In this respect, major attention is given to Porous Silicon (PS), a non-crystalline material which has demonstrated peculiar features apt for sensing. Given its irregular microscopic morphology, made of a complex network of crystalline and non-crystalline regions, PS has a large surface-to-volume ratio and a relevant chemical reactivity at room temperature. In this work we start from the fabrication of PS nanowires at Istituto Nazionale di Ricerca Metrologica in Torino (I.N.Ri.M.) to devise two main models for PSNWs which can be used to understand the effects of porosity on electron transport in these structures. The two modelling procedures have different validity regimes and efficiently take quantum effects into account. Their description and results are presented. The last part of the thesis is devoted to the impact of surface interaction of molecular compounds and dielectric materials on the transport properties of SiNWs. Knowing how molecules interact with silicon atoms and how the conductance of the wire is affected is indeed the core of SiNWs used for bio-chemical sensing. In order to study the phenomena involved, we performed ab-initio simulations of a silicon surface interacting with SO2 and NO2 via the SIESTA package, which implements DFT code. The calculations were performed at Institut de Ciencia De Materials de Barcelona (ICMAB-CSIC) using their computational resources. The results of this simulation step are then exploited to perform simulations of systems made of an enormous quantity of atoms.
Due to their large dimensions, atomistic simulations are not affordable and other approaches are necessary. Consequently, calculations with physics-based software on a larger spatial scale were adopted. The descripti