A Strategic Decision Support System Using Multiplayer Non-Cooperative Games for Resource Allocation After Natural Disasters

A severe natural disaster causes concurrent emergencies in separate geographical locations. However, tackling all the emergencies simultaneously with the limited available resources is challenging. A novel single-stage, non-cooperative multiplayer game-based solution approach for resource allocation is proposed in this paper, where the crisis locations demanding resources are treated as the players. The game-based decision support system (DSS) is intended to be implemented by the concerned disaster management authority to obtain a practical strategy for the allocation of indivisible and divisible resources among individual players in a limited-resource environment. Any feasible allocation is associated with some cost on the basis of a non-monetary cost function. A discrete strategic game is formulated to tackle indivisible resource allocation, and a continuous-kernel game for divisible resources. A mathematical analysis establishes that, with the proposed cost function, both games possess at least one pure-strategy Nash equilibrium (PSNE). Based on different selection criteria, a particular PSNE is chosen from the collection of multiple PSNEs and used as a desirable resource allocation strategy. A complexity analysis of the proposed algorithm is also carried out. Case studies are given in this paper to demonstrate the results developed. Note to Practitioners —The cost function proposed in this paper to formulate the non-cooperative game-based DSS will be useful to disaster management authorities in resource allocation problems after any natural disaster. While distributing a limited quantity of essential resources, the cost function maintains an overall balance so that none of the disaster locations is deprived or favoured unreasonably.
The feasible allocations of the existing resources are determined by the solutions of a multiplayer game. The case study indicates that the disaster management authority can choose different post-disaster ground truths to compute the possible resource demands of the disaster locations. Using the demand vector (a vector comprising the demands of the players as elements) and the quantity of available resources as the inputs, the game-theoretic model generates the allocation vector, according to which resources can be allocated to the disaster locations.
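To make the DSS input/output shape concrete — a demand vector and a resource count in, an allocation vector out — here is a minimal sketch. The balanced greedy rule below is NOT the paper's game-theoretic cost function or PSNE selection; it is a hypothetical stand-in that mimics the stated goal of not depriving or favouring any location unreasonably.

```python
def allocate(demands, supply):
    """Hand out `supply` indivisible units, always giving the next unit to the
    location whose unmet fraction of demand is currently largest."""
    alloc = [0] * len(demands)
    for _ in range(supply):
        # unmet fraction of each location's demand
        gaps = [(d - a) / d for d, a in zip(demands, alloc)]
        alloc[gaps.index(max(gaps))] += 1
    return alloc

print(allocate([4, 2], 3))   # -> [2, 1]
```

With demands (4, 2) and only 3 units, the rule keeps both locations at a similar fraction of satisfied demand instead of exhausting one location's request first.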

Dynamic Path Planning and Motion Control of Microrobotic Swarms for Mobile Target Tracking

Magnetic field-driven microrobotic swarms have drawn extensive attention, especially in the field of automatic control. Realizing dynamic path planning and motion control of microrobotic swarms for mobile target tracking remains an important unsolved task. In this paper, we first present an enhanced bidirectional rapidly-exploring random tree star (EB-RRT*) algorithm that considers the physical size of the swarm to dynamically plan the optimal path for obstacle avoidance. An image-guided motion controller, which consists of a direction controller and a Genetic Algorithm-based Linear Quadratic Regulator (GA-LQR) velocity controller, is then proposed to realize mobile target tracking using microrobotic swarms. A targeted bursting algorithm is subsequently developed to meet the requirement of tracking high-speed (i.e., 20 μm/s) mobile targets. Simulations are performed to validate the proposed methods and obtain the proper ranges of the input parameters for the controllers. Finally, the control effectiveness of mobile target tracking in different conditions and environments is validated by experimental results. Note to Practitioners —The motivation of this work is to develop an effective control scheme for mobile target tracking using microrobotic swarms. Conventional control schemes mainly focus on the control of single microrobots to reach static targets, and thus the desired path is fixed once planned. In addition, the motion of single monolithic microrobots can be modelled precisely.
However, in mobile target tracking using microrobotic swarms, dynamic planning algorithms are needed to update the desired path frequently. Swarms consisting of millions of micro-agents are also difficult to model because of the complex agent-agent interactions. In this work, an effective control scheme consisting of a dynamic path planner, a motion controller and a targeted bursting unit is developed. Real-time dynamic paths can be planned even when the positions of the swarm and the target change rapidly. Precise control of the swarm direction and velocity is achieved, and moreover, using the targeted bursting algorithm, the swarm can be accelerated to approach mobile targets accurately with higher efficiency. Experimental results validate the proposed tracking strategy in different environments with virtual obstacles. The proposed control scheme paves the way for a better understanding of advanced motion control methods for microrobotic swarms.
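As a pointer to how an LQR velocity gain is obtained, here is a scalar discrete-time sketch. The paper's GA-LQR tunes the weighting matrices with a genetic algorithm; the dynamics (a, b) and weights (q, r) below are hypothetical placeholders, and the code only illustrates the Riccati fixed-point step that yields the feedback gain.

```python
def lqr_gain(a, b, q, r, iters=500):
    """Scalar discrete-time LQR: iterate the Riccati recursion
    P = q + a^2*P - (a*b*P)^2 / (r + b^2*P) to a fixed point,
    then return the state-feedback gain K = a*b*P / (r + b^2*P)."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

k = lqr_gain(1.0, 1.0, 1.0, 1.0)   # golden-ratio case: K ~= 0.618
```

With a = b = q = r = 1 the Riccati fixed point is the golden ratio and the closed-loop pole a - b*K ≈ 0.382 is stable; a GA would search over q and r to shape the tracking response.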

The Impact of Processing Time Variations on Swap Sequence Performance in Dual-Armed Cluster Tools

The performance of a swap sequence is analyzed by assuming cyclic scheduling in dual-armed cluster tools with processing time variations. A dual-armed cluster tool consists of multiple processing modules (PMs), one material handling robot that can hold two wafers at the same time, and loadlocks where wafer cassettes are loaded or unloaded. The swap sequence in a dual-armed cluster tool is widely used in practice and known to be optimal with deterministic processing times when the bottleneck PM’s workload is larger than the robot workload. However, in practice, processing times on a PM can vary slightly, so that each wafer on the PM has a different processing time. Hence, when processing time variation is introduced, the performance of the swap sequence needs to be analyzed. This paper first defines a fundamental cycle and analyzes its cycle time. It then proposes optimality conditions of the swap sequence and performs numerical experiments to show the effectiveness of the sequence. Note to Practitioners —A dual-armed cluster tool used for semiconductor manufacturing processes is usually operated with a swap sequence because it is simple, easy to control, and proven to be optimal with deterministic processing times. However, studies on the performance of the swap sequence under the processing time variations in PMs that often occur in practice are still limited. Hence, this study shows the effectiveness of the swap sequence under processing time variations by analyzing cycle times and optimality conditions and by performing numerical experiments.
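The intuition that variation hurts a swap sequence can be seen with a deliberately simplified toy model (not the paper's fundamental-cycle analysis): if each cycle lasts as long as the slower of the robot workload and the slowest PM, then jittering the processing times raises the mean cycle time, because the maximum of random variables is convex. The workloads and jitter range below are made up for illustration.

```python
import random

def mean_cycle_time(proc_times, robot_workload, jitter, n=20000, seed=0):
    """Toy swap-cycle model: each cycle's length is the larger of the robot
    workload and the slowest PM's (jittered) processing time.  Returns the
    Monte Carlo mean over n cycles."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        slowest = max(p + rng.uniform(-jitter, jitter) for p in proc_times)
        total += max(slowest, robot_workload)
    return total / n

det = mean_cycle_time([60, 55, 58], robot_workload=40, jitter=0.0)   # -> 60.0
var = mean_cycle_time([60, 55, 58], robot_workload=40, jitter=5.0)   # mean exceeds 60
```

Even though the jitter is zero-mean, the near-bottleneck PMs occasionally overtake the nominal bottleneck, so the average cycle is strictly longer than the deterministic one — which is why the optimality conditions need re-examination under variation.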

Vibration Control for a Three-Dimensional Variable Length Flexible String With Time-Varying Actuator Faults and Unknown Control Directions

In this study, actuator faults and unknown control directions are taken into account concurrently for a three-dimensional variable length flexible string system modeled by partial differential equations (PDEs). Unlike previous fault-tolerance research on PDE systems, in which the unknown fault parameters are restricted to positive constants, time-varying actuator faults are considered here, which is more practical but challenging. Additionally, the control directions of the system are unknown and can differ across the three directions. To overcome these difficulties, adaptive control laws employing the Nussbaum function are designed, and auxiliary control signals are introduced. Under the proposed boundary control scheme, uniform ultimate boundedness of the closed-loop system is guaranteed, and the deflections of the flexible string can be suppressed to a small neighborhood of zero. Finally, simulations are carried out to demonstrate the control effect. Note to Practitioners —String structural systems are widely used in indoor and outdoor industrial and mining enterprises, ports, wharves, and seabeds. To improve work efficiency and safety, effectively suppressing the vibration of the flexible string is a crucial problem to be solved. This paper proposes a novel adaptive boundary control scheme to reduce the deflections of the flexible string system subject to time-varying actuator faults and unknown control directions in three-dimensional space. The control scheme is designed without simplification, which can provide a theoretical basis and a feasible solution for the control of flexible string systems in practical engineering. In the future, we will address the modeling and control problems of moving three-dimensional variable length flexible string systems.
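The Nussbaum-function idea for unknown control directions can be seen on a scalar toy plant rather than the paper's PDE system: the gain N(k) = k^2*cos(k) switches sign as the adaptation variable k grows, so the same law eventually regulates the state whichever sign the control gain has. The plant, the specific Nussbaum function, and all constants below are illustrative assumptions, not the paper's design.

```python
import math

def simulate(b, x0=1.0, dt=1e-3, steps=100_000):
    """Euler simulation of the scalar toy plant x' = b*u with unknown-sign
    gain b, under the Nussbaum-type law u = N(k)*x with N(k) = k^2*cos(k)
    and adaptation k' = x^2."""
    x, k = x0, 0.0
    for _ in range(steps):
        u = (k * k * math.cos(k)) * x
        x += dt * b * u
        k += dt * x * x
    return x

# the same law drives x toward zero for either sign of the control gain
residual_pos = abs(simulate(+1.0))
residual_neg = abs(simulate(-1.0))
```

If the current sign of N(k) happens to destabilize the loop, x grows, which speeds up k, which sweeps N(k) into the stabilizing sign region, where k then settles — the mechanism the adaptive boundary laws exploit in each of the three directions.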

Dynamic Scheduling Stochastic Multiworkflows With Deadline Constraints in Clouds

Nowadays, more and more workflows with different computing requirements are migrated to clouds and executed with cloud resources. In this work, we study the problem of scheduling stochastic multi-workflows in clouds and formalize it as an optimization problem that is NP-hard. To solve this problem, an efficient stochastic multi-workflow dynamic scheduling algorithm called SMWDSA is designed to schedule multi-workflows with deadline constraints while optimizing the multi-workflow scheduling cost. The proposed SMWDSA consists of three stages: multi-workflow preprocessing, multi-workflow scheduling, and scheduling feedback. In SMWDSA, a novel task sub-deadline assignment strategy is designed to assign a sub-deadline to each task of the multi-workflows so that the workflow deadline constraints are met. Then, we propose a task scheduling method based on the minimal time slot availability to execute tasks, minimizing the workflow scheduling cost while meeting workflow deadlines. Finally, a scheduling feedback strategy is adopted to update the priorities and sub-deadlines of unscheduled tasks, further reducing the workflow scheduling cost. We conduct experiments using both synthetic data and real-world data to evaluate SMWDSA. The results demonstrate the superiority of SMWDSA compared with state-of-the-art algorithms. Note to Practitioners —Workflow scheduling in clouds is significantly challenging due to not only the large scale of workflows but also the elasticity and heterogeneity of cloud resources. Moreover, minimizing the workflow scheduling cost and satisfying workflow deadlines are two critical issues in scheduling with cloud resources, especially when the uncertainty of workflow arrival times and task execution times is considered.
To meet workflow deadlines, decomposing workflow deadline constraints into task sub-deadline constraints is an effective strategy. To minimize the workflow scheduling cost, each task in a workflow needs to be assigned to its most suitable VM for execution. This article presents a novel workflow scheduling algorithm to schedule stochastic multi-workflows in clouds, optimizing the multi-workflow scheduling cost while meeting workflow deadlines. The algorithm derives the task sub-deadline constraints from the characteristics of the workflows so that the workflow deadline constraints are met. Under the premise of meeting task deadlines, it schedules each task to the VM with the minimum slot time to minimize the cost. Case studies based on well-known real-world workflow data sets suggest that it outperforms traditional algorithms in terms of the success rate and cost of multi-workflow scheduling. It can thus aid the design and optimization of multi-workflow scheduling in a cloud environment and help practitioners better manage the scheduling cost and performance of real-world applications built upon cloud services.
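The deadline-decomposition idea can be sketched for the simplest case of a chain-structured workflow. The proportional rule below is a common textbook heuristic, not necessarily SMWDSA's exact strategy: each task's sub-deadline is the workflow deadline scaled by the cumulative estimated runtime up to and including that task.

```python
def sub_deadlines(chain_runtimes, deadline):
    """Distribute a workflow deadline over a chain of tasks in proportion to
    their estimated runtimes: task i's sub-deadline is the deadline scaled by
    the cumulative runtime through task i."""
    total = sum(chain_runtimes)
    out, acc = [], 0.0
    for t in chain_runtimes:
        acc += t
        out.append(deadline * acc / total)
    return out

print(sub_deadlines([2, 3, 5], 20))   # -> [4.0, 10.0, 20.0]
```

Meeting each sub-deadline then guarantees the workflow deadline, and any slack a task leaves behind can be fed back to relax the sub-deadlines of its successors — the role of the scheduling feedback stage.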

Spatial–Temporal Load Balancing and Coordination of Multi-Robot Stations

Cycle time minimization in multi-robot manufacturing stations is computationally challenging. This is due to the many aspects that need to be accounted for, including assigning process tasks to robots, specifying robot configurations at tasks, sequencing, planning motions, and coordinating the robots to avoid collisions. Hence, to find good solutions, some assumptions are often made and/or the problem is divided into subproblems—often limiting the set of solutions, with the risk of excluding the best ones. In this study, we generalize the completely disjoint solution method that challenges the so-called shortest path assumption, i.e., to let each robot use its shortest collision-free motion between any two configurations, regardless of the other robots. We devise a generalized method called spatial–temporal load balancing and coordination, which prevents robot–robot collisions by a sequence of disjoint solutions, guiding task assignments, sequences, and robot motions (path and velocity). We study both artificial and industrial instances. For some of them, our suggested method is superior to methods based on the shortest path assumption, with as much as a 28% reduction in cycle time. Moreover, for problem instances with no special structure, we establish that the shortest path assumption is often reasonable. Note to Practitioners —This work is motivated by a particular industrial problem instance of a spot-welding station with two robots, in which welds are placed along the edge of a workpiece. Due to the special geometry of the instance, one robot can only perform welds in the middle of the edge and the other only at the ends.
As a result, if the robots use their shortest motions between welds, then waiting times are required to prevent collisions. Moreover, the tasks are too close to each other to allow for a completely disjoint solution. Hence, we suggest a method based on a sequence of disjoint solutions.
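The waiting-time mechanism mentioned above can be illustrated with a deliberately minimal interval model (hypothetical, far simpler than the paper's spatial-temporal coordination): if robot A occupies a shared zone over one time interval, robot B's planned visit is shifted just far enough that the two occupancy intervals never overlap.

```python
def delay_to_avoid(busy, wanted):
    """Given the interval `busy` = (start, end) during which robot A occupies
    a shared zone, shift robot B's requested interval `wanted` just far enough
    that the two never overlap.  Returns B's adjusted interval."""
    b0, b1 = busy
    w0, w1 = wanted
    if w0 < b1 and b0 < w1:          # intervals overlap -> wait until A leaves
        shift = b1 - w0
        return (w0 + shift, w1 + shift)
    return wanted

print(delay_to_avoid((2, 5), (4, 7)))   # -> (5, 8)
```

The induced one-second wait is exactly the loss the disjoint-solution sequencing tries to minimize by re-balancing which robot does which tasks, and in which order.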

Open Access
Adaptive Sampling and Quick Anomaly Detection in Large Networks

The monitoring of data streams with a network structure has drawn increasing attention due to its wide applications in modern process control. In these applications, high-dimensional sensor nodes are interconnected with an underlying network topology. In such a case, abnormalities occurring at any node may propagate dynamically across the network and cause changes at other nodes over time. Furthermore, the high dimensionality of such data significantly increases the cost of resources for data transmission and computation, such that only partial observations can be transmitted or processed in practice. Overall, how to quickly detect abnormalities in such large networks with resource constraints remains a challenge, especially due to the sampling uncertainty under dynamic anomaly occurrences and network-based patterns. In this paper, we incorporate network structure information into the monitoring and adaptive sampling methodologies for quick anomaly detection in large networks where only partial observations are available. We develop a general monitoring and adaptive sampling method and further extend it to the case with memory constraints, both of which exploit network distance and centrality information for better process monitoring and identification of abnormalities. Theoretical investigations of the proposed methods demonstrate their sampling efficiency in balancing between exploration and exploitation, as well as their detection performance guarantees. Numerical simulations and a case study on a power network demonstrate the superiority of the proposed methods in detecting various types of shifts. Note to Practitioners —Continuous monitoring of networks for anomalous events is critical for a large number of applications involving power networks, computer networks, epidemiological surveillance, social networks, etc.
This paper aims at addressing the challenges in monitoring large networks when monitoring resources are limited such that only a subset of nodes in the network is observable. Specifically, we integrate the network structure information of nodes to construct sequential detection methods via effective data augmentation, and to design adaptive sampling algorithms that observe suspicious nodes that are likely to be abnormal. The method is then further generalized to the case in which the memory available for computation is also constrained by the network size. The developed method is effective for various anomaly patterns, especially when the initial anomaly occurs at random nodes in the network. The proposed methods are shown to quickly detect changes in the network and to dynamically adjust the sampling priority based on online observations in various cases, as demonstrated in the theoretical investigation, simulations, and case studies.
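A skeletal version of "sample only a few nodes, rank by a local statistic plus network information" might look as follows. This is a hypothetical toy, not the paper's method: the local statistic is a CUSUM-style recursion, the centrality bonus is simply node degree, and one sample per round is spent on round-robin exploration so that unobserved nodes are not starved.

```python
def monitor(observe, degree, n_nodes, budget, rounds, drift=0.5):
    """Each round, spend one sample on round-robin exploration and the rest on
    the nodes ranked highest by a CUSUM-style statistic plus a small
    degree-centrality bonus; only sampled nodes update their statistics."""
    stat = [0.0] * n_nodes
    top = max(degree)
    for r in range(rounds):
        ranked = sorted(range(n_nodes),
                        key=lambda i: stat[i] + 0.01 * degree[i] / top,
                        reverse=True)
        chosen = set(ranked[:budget - 1]) | {r % n_nodes}   # exploit + explore
        for i in chosen:
            stat[i] = max(0.0, stat[i] + observe(i) - drift)
    return stat

# toy network of 6 nodes: node 3 carries a mean shift of 2.0, all others emit 0
deg = [1, 3, 2, 2, 1, 1]
stats = monitor(lambda i: 2.0 if i == 3 else 0.0, deg, 6, budget=2, rounds=12)
```

Once exploration touches the shifted node, its statistic grows and the exploitation slot locks onto it — the exploration/exploitation balance the paper analyzes under far more general anomaly and propagation models.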

Drone Stations-Aided Beyond-Battery-Lifetime Flight Planning for Parcel Delivery

This paper considers using drones to conduct last-mile parcel delivery. To enable beyond-battery-lifetime flight, drone stations are considered to replace or recharge the batteries of drones. We focus on the flight planning problem with the goal of minimizing the total travel time from the depot to a customer, a key indicator of the quality of service. We investigate four typical ways for the drone to obtain extra energy at drone stations: 1) replacing the battery with a fresh one, 2) recharging the battery to full capacity, 3) recharging the battery to the optimal level, and 4) recharging the battery to the optimal level while accounting for the availability of drone stations (i.e., whether a drone station is occupied by other drones). While the first two scenarios can be formulated within the framework of integer linear programming, the last two scenarios turn into mixed-integer nonlinear programming problems. To address the latter problems, we present a framework in which discretized state graphs are constructed first and the optimal paths are then found by graph searching algorithms. We propose a dynamic version of Dijkstra’s algorithm to deal with the unavailability of drone stations. The algorithm can quickly find the optimal flight path for a drone, and extensive computer-based experimental results are presented to demonstrate the effectiveness of the proposed method. Note to Practitioners —Multi-rotary unmanned aerial vehicles (UAVs), also known as drones, have been regarded as a promising means to reshape future logistics. To save human labour and reduce cost, many giant logistics companies have been dedicated to developing various drones to deliver light and small parcels during the past decade. However, due to the limitation of payload, the battery capacity is constrained, which prevents drones from long-distance flights.
Practitioners have tried the drone-vehicle collaboration method, but it still requires human participation. In this paper, we present a framework in which drones autonomously conduct long-distance delivery with the assistance of drone stations. It is worth pointing out that such a framework is not meant to replace the ground delivery method but to serve as an alternative to the ground counterpart for small and light parcels. A particular focus is on the flight planning from the depot to a destination, which includes not only a sequence of drone stations to stop at but also the corresponding rest time to recharge the battery. Several typical battery recharging scenarios are discussed, and a dynamic version of Dijkstra’s algorithm is presented to deal with the challenging case where drone station resources are limited. The presented approach is able to find the optimal flight plan quickly.
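The core station-routing idea can be sketched with plain Dijkstra over a tiny made-up station graph (the paper's discretized state graphs and dynamic re-planning are much richer): edge weights are flight times, and every intermediate stop additionally pays that station's current recharge or queueing wait, so an occupied station naturally pushes the search onto another route.

```python
import heapq

def best_route(edges, wait, src, dst):
    """Dijkstra over a station graph.  Edge weights are flight times; every
    intermediate stop also pays that station's current wait time."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, t in edges.get(u, []):
            nd = d + t + (wait.get(v, 0.0) if v != dst else 0.0)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

edges = {"depot": [("S1", 4), ("S2", 3)],
         "S1": [("cust", 4)], "S2": [("cust", 5)]}
free = best_route(edges, {"S1": 1, "S2": 0}, "depot", "cust")   # via S2: 8
busy = best_route(edges, {"S1": 1, "S2": 3}, "depot", "cust")   # via S1: 9
```

When station S2 becomes occupied (its wait grows from 0 to 3), the optimal route flips from the S2 path to the S1 path — the kind of availability-driven re-routing the dynamic Dijkstra variant handles as waits change over time.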

Model and Data Driven Machine Learning Approach for Analyzing the Vulnerability to Cascading Outages With Random Initial States in Power Systems

In this paper, a hybrid machine learning model is applied to evaluate the relationship between random initial states and the power system’s vulnerability to cascading outages. A cascading outage simulator (CS), which uses off-line AC power flows, is proposed for generating training data. The initial states are randomly selected and the CS model is deployed for each initial state, where power system generation and loads are adjusted dynamically and power flows are redistributed to quantify the vulnerability metric. Furthermore, the proposed hybrid machine learning model combines Support Vector Machine (SVM) classification and Gradient Boosting Regression (GBR) to improve the learning precision. The classification model is trained by SVM, which divides the data into two categories, with and without load shedding. Then, GBR is adopted only for the data with load shedding to determine the relationship between input power outage states and the vulnerability metric. The proposed vulnerability analysis approach is applied to several test systems and the results are analyzed. Note to Practitioners —The power system vulnerability can be quantified by cascading outage simulations. However, there are two challenges: i) there is a huge number of possible initial states, so we can neither enumerate all of them for the cascading outage simulation nor precisely quantify the bus vulnerability; ii) the cascading outage simulation may be time-consuming for large-scale power systems, which is challenging for online application. To address these challenges, we design a machine learning technique to predict the power system vulnerability, which trains the model offline and then uses it for online application.
First, since there is not enough operational data from practical power systems, we develop a cascading outage simulator, using off-line AC power flows, to generate synthetic training data. Second, we observe that the training precision obtained by directly applying a regression model may be very poor because the output of the machine learning model may have an uneven distribution with respect to the input parameters. Thus, we propose a hybrid machine learning model with a combined classification and regression method, where the classification model is employed to remove the data without load shedding, and the regression model then determines the relationship between input power outage states and the vulnerability metric. The proposed model and method have been tested on several systems, including a practical large-scale Polish power system, to demonstrate their effectiveness.
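The classify-then-regress structure can be shown with a dependency-free stand-in (a threshold rule instead of an SVM, a least-squares line instead of GBR, and invented toy data): stage one filters out the no-load-shedding samples, and stage two is fitted only on the remainder.

```python
def fit_hybrid(xs, ys):
    """Two-stage stand-in for an SVM + GBR pipeline: a threshold classifier
    separates no-load-shedding samples (y == 0), and a least-squares line is
    fitted only on the load-shedding samples.  Returns a predictor."""
    shed = [(x, y) for x, y in zip(xs, ys) if y > 0]
    cut = min(x for x, _ in shed)                 # stand-in decision boundary
    n = len(shed)
    mx = sum(x for x, _ in shed) / n
    my = sum(y for _, y in shed) / n
    a = (sum((x - mx) * (y - my) for x, y in shed)
         / sum((x - mx) ** 2 for x, _ in shed))
    b = my - a * mx
    return lambda x: 0.0 if x < cut else a * x + b

# invented toy data: no shedding below stress level 5, then linear growth
xs = [1, 2, 3, 5, 6, 7, 8]
ys = [0, 0, 0, 2, 4, 6, 8]
pred = fit_hybrid(xs, ys)
```

Fitting the regressor on all samples would drag the line toward the flat zero region; removing the no-shedding class first is exactly what makes the second stage precise on the cases that matter.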