Employment and Unemployment

Abstract

In creating the Organisation for Economic Co-operation and Development in 1960 as the successor to the Organisation for European Economic Co-operation, the (then) twenty signatory countries to the OECD convention ‘established their basic aims as the promoting of policies designed: a) to achieve the highest sustainable economic growth and employment and a rising standard of living in Member countries, while maintaining financial stability, and thus to contribute to the development of the world economy; b) to contribute to sound economic expansion in member as well as non-member countries in the process of economic development; and c) to contribute to the expansion of world trade on a multilateral, non-discriminatory basis in accordance with international obligations’1.

Keywords: Labour Market; Unemployment Rate; Employment Growth; Labour Force Survey; Macroeconomic Policy

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Similar Papers
  • Research Article
  • Citations: 35
  • 10.1109/tnnls.2020.3026114
A Bilevel Learning Model and Algorithm for Self-Organizing Feed-Forward Neural Networks for Pattern Classification.
  • Oct 5, 2020
  • IEEE Transactions on Neural Networks and Learning Systems
  • Hong Li + 1 more

Conventional artificial neural network (ANN) learning algorithms for classification tasks, whether derivative-based or derivative-free, first train the ANN (or train and validate it) and then test it, a two-stage, one-pass learning mechanism. This mechanism may not guarantee the generalization ability of the trained ANN. In this article, a novel bilevel learning model is constructed for self-organizing feed-forward neural networks (FFNNs), in which the training and testing processes are integrated into a unified framework. In this bilevel model, the upper-level optimization problem targets the testing error on the testing data set and the network architecture based on network complexity, whereas the lower-level optimization problem targets the network weights based on the training error on the training data set. For this bilevel framework, an interactive learning algorithm is proposed to optimize the architecture and weights of an FFNN with consideration of both training error and testing error. In this algorithm, a hybrid binary particle swarm optimization (BPSO), taken as the upper-level optimizer, self-organizes the network architecture, whereas the Levenberg-Marquardt (LM) algorithm, as the lower-level optimizer, optimizes the connection weights. The bilevel learning model and algorithm have been tested on 20 benchmark classification problems. Experimental results demonstrate that the bilevel learning algorithm produces significantly more compact FFNNs with better generalization ability than conventional learning algorithms.

  • Research Article
  • Citations: 24
  • 10.4018/ijimr.2013010105
An Advance Q Learning (AQL) Approach for Path Planning and Obstacle Avoidance of a Mobile Robot
  • Jan 1, 2013
  • International Journal of Intelligent Mechatronics and Robotics
  • Arpita Chakraborty + 1 more

The goal of this paper is to improve the performance of the well-known Q-learning algorithm, a robust machine learning technique, to facilitate path planning in an environment. Existing Q-learning variants such as the Classical Q-learning (CQL) and Improved Q-learning (IQL) algorithms deal with an environment without obstacles, while in a real environment an agent has to face obstacles very frequently. Hence this paper considers an environment with a number of obstacles and introduces a new parameter, an ‘immediate penalty’ incurred on collision with an obstacle. Further, the proposed technique replaces the scalar ‘immediate reward’ function with an ‘effective immediate reward’ function consisting of two fuzzy parameters, ‘immediate reward’ and ‘immediate penalty’. The fuzzification of these two parameters not only improves the learning technique, it also strikes a balance between exploration and exploitation, the most challenging problem of reinforcement learning. The proposed algorithm stores the Q-value for the best possible action at a state, and it saves significant path-planning time by suggesting the best action to adopt at each state to move to the next state. Eventually, the agent becomes more intelligent as it can plan a collision-free path, avoiding obstacles from a distance. The algorithm is validated through computer simulation in a maze-like environment and on the Khepera II platform in real time. An analysis reveals that the Q-table obtained by the proposed Advanced Q-learning (AQL) algorithm, when used for the path-planning application of mobile robots, outperforms classical and improved Q-learning.
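The effective-reward idea can be sketched with plain tabular Q-learning; the corridor world, the simple reward-minus-penalty combination, and all numeric settings below are illustrative assumptions, not the paper's fuzzified formulation:

```python
def effective_reward(reward, penalty):
    """Stand-in for the paper's fuzzy combination: reward net of penalty."""
    return reward - penalty

def q_update(q, s, a, ns, r_eff, alpha=0.5, gamma=0.9):
    """Standard Q-learning update driven by the effective immediate reward."""
    q[s][a] += alpha * (r_eff + gamma * max(q[ns]) - q[s][a])

# Toy 1-D corridor: states 0..3, goal at 3, penalized cell at 2;
# actions 0 = left, 1 = right.
q = [[0.0, 0.0] for _ in range(4)]
for _ in range(50):                      # deterministic sweeps over (state, action)
    for s in range(3):
        for a in range(2):
            ns = max(0, min(3, s + (1 if a == 1 else -1)))
            reward = 1.0 if ns == 3 else 0.0   # reaching the goal
            penalty = 1.0 if ns == 2 else 0.0  # stepping onto the penalized cell
            q_update(q, s, a, ns, effective_reward(reward, penalty))

policy = [max(range(2), key=lambda a: q[s][a]) for s in range(3)]
print(policy)  # state 1 learns to avoid moving toward the penalized cell
```

The penalty term steers the greedy policy away from the obstacle even though the update rule itself is unchanged, which is the essence of the paper's modification.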

  • Research Article
  • Citations: 1
  • 10.3389/fnhum.2022.902183
Metric Learning in Freewill EEG Pre-Movement and Movement Intention Classification for Brain Machine Interfaces.
  • Jul 1, 2022
  • Frontiers in human neuroscience
  • William Plucknett + 2 more

Decoding movement-related intentions is a key step in implementing BMIs. Decoding EEG has been challenging due to its low spatial resolution and signal-to-noise ratio. Metric learning finds a representation of data that captures a desired notion of similarity between data points. In this study, we investigate how metric learning can help find a representation of the data to efficiently classify EEG movement and pre-movement intentions. We evaluate the effectiveness of the obtained representation by comparing the classification performance of a Support Vector Machine (SVM) trained on the original (Euclidean) representation against representations obtained with three metric learning algorithms: Conditional Entropy Metric Learning (CEML), Neighborhood Component Analysis (NCA), and Entropy Gap Metric Learning (EGML). We examine different types of features, such as time and frequency components, as input to the metric learning algorithms, and apply both linear and non-linear SVMs to compare classification accuracies on a publicly available EEG data set for two subjects (Subjects B and C). Although the metric learning algorithms do not increase classification accuracy, their interpretability, via an importance measure we define here, helps in understanding data organization and how much each EEG channel contributes to the classification. Among the metric learning algorithms investigated, EGML shows the most robust performance due to its ability to compensate for differences in scale and correlations among variables. Furthermore, from the observed variations of the importance maps on the scalp and of the classification accuracy, selecting an appropriate feature extraction step, such as clipping the frequency range, has a significant effect on the outcome of metric learning and the subsequent classification. In our case, reducing the range of the frequency components to 0–5 Hz gives the best interpretability for both Subjects B and C and the best classification accuracy for Subject C. Our experiments support the potential benefits of metric learning algorithms: the importance measure provides a visual explanation of the data projections underlying the inter-class separations, visualizing the contribution of features that can be related to brain function.
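The Euclidean-versus-learned-metric comparison can be sketched with scikit-learn, whose `NeighborhoodComponentsAnalysis` implements the NCA step (CEML and EGML are not in scikit-learn); the synthetic features below merely stand in for the EEG time/frequency components:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.svm import SVC

# Synthetic stand-in for EEG feature vectors (e.g. band-power features).
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: linear SVM on the original (Euclidean) representation.
acc_euclidean = SVC(kernel="linear").fit(X_tr, y_tr).score(X_te, y_te)

# Same classifier on the representation learned by NCA.
nca = NeighborhoodComponentsAnalysis(random_state=0).fit(X_tr, y_tr)
acc_nca = (SVC(kernel="linear")
           .fit(nca.transform(X_tr), y_tr)
           .score(nca.transform(X_te), y_te))

print(f"Euclidean: {acc_euclidean:.2f}  NCA: {acc_nca:.2f}")
```

As in the study, the learned metric need not raise accuracy; a side benefit is that the fitted transformation (`nca.components_`) exposes how input features are weighted, which is the kind of interpretability the authors exploit.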

  • Dissertation
  • Citations: 1
  • 10.22215/etd/2019-13394
Learning in the Multi-Robot Pursuit Evasion Game
  • Feb 26, 2019
  • Ahmad Al-Talabi

This thesis investigates the learning problem for mobile robots playing differential forms of the pursuit-evasion (PE) game, proposing several learning algorithms. These algorithms aim to reduce (1) the computational requirements, without affecting the overall performance of the learning algorithm, (2) the learning time, and (3) the capture time and the possibility of collision among the pursuers, and to deal with the multi-robot PE game with a single superior evader.

The computational complexity is reduced by examining four methods of parameter tuning for the Q-Learning Fuzzy Inference System (QFIS) algorithm, to decide which parameters are best to tune and which have little impact on performance. Two learning algorithms are then proposed to reduce the learning time. The first uses a two-stage learning technique that combines a PSO-based fuzzy logic control (FLC) algorithm with the QFIS algorithm; PSO acts as a global optimizer, whereas QFIS acts as a local optimizer. The second is a modified version of the fuzzy actor-critic learning (FACL) algorithm, called the fuzzy actor-critic learning automaton (FACLA) algorithm, which uses the continuous actor-critic learning automaton (CACLA) algorithm to tune the parameters of the FIS.

A decentralized learning technique is then proposed to enable a group of two or more pursuers to capture a single inferior evader. It uses the FACLA algorithm together with Kalman filtering to reduce the capture time and the possibility of collision among the pursuers; no communication among the pursuers is assumed. Finally, a decentralized learning algorithm is proposed and applied successfully to the multi-robot PE game with a single superior evader, in which all players have similar speeds. A new reward function guides each pursuer either to move to the interception point with the evader or to move in parallel with the evader, depending on whether the pursuer can capture the evader. Simulation results show the feasibility of the proposed learning algorithms.

  • Research Article
  • Citations: 5
  • 10.1155/2013/926267
Multiagent Reinforcement Learning with Regret Matching for Robot Soccer
  • Jan 1, 2013
  • Mathematical Problems in Engineering
  • Qiang Liu + 2 more

This paper proposes a novel multiagent reinforcement learning (MARL) algorithm, Nash-Q learning with regret matching, in which regret matching is used to speed up the well-known MARL algorithm Nash-Q learning. Choosing a suitable action-selection strategy that balances exploration and exploitation is critical to the online learning ability of Nash-Q learning. In a Markov game, the joint action of agents adopting the regret matching algorithm converges to a set of no-regret points, which can be viewed as the coarse correlated equilibrium, a set that in essence includes the Nash equilibrium. It can therefore be inferred that regret matching can guide exploration of the state-action space so that the convergence rate of the Nash-Q learning algorithm is increased. Simulation results on robot soccer validate that, compared to the original Nash-Q learning algorithm, using regret matching during the learning phase gives excellent online learning ability and significantly better performance in terms of scores, average reward, and policy convergence.
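Regret matching itself is simple to sketch; the two-action matrix game and the uniformly random opponent below are illustrative assumptions (the paper applies regret matching inside Nash-Q over robot-soccer states):

```python
import random

def regret_matching_policy(regrets):
    """Mix actions in proportion to positive cumulative regret;
    fall back to uniform when no regret is positive."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    n = len(regrets)
    return [p / total for p in positive] if total > 0 else [1.0 / n] * n

def play(payoff, rounds=2000, seed=1):
    """Row player uses regret matching against a uniformly random opponent."""
    rng = random.Random(seed)
    regrets = [0.0, 0.0]
    for _ in range(rounds):
        probs = regret_matching_policy(regrets)
        a = 0 if rng.random() < probs[0] else 1
        b = rng.randrange(2)
        for alt in range(2):  # regret = payoff of the alternative minus realized payoff
            regrets[alt] += payoff[alt][b] - payoff[a][b]
    return regret_matching_policy(regrets)

# Row payoffs: action 1 strictly dominates action 0, so no-regret play
# should concentrate on action 1.
policy = play([[0.0, 0.0], [1.0, 1.0]])
print(policy)
```

The point of the sketch is the selection rule: play concentrates on actions with positive accumulated regret, which is what lets regret matching steer exploration in the full algorithm.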

  • Research Article
  • Citations: 315
  • 10.1109/tsmcc.2011.2138694
Learning Algorithms for Fuzzy Cognitive Maps—A Review Study
  • Mar 1, 2012
  • IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)
  • Elpiniki I Papageorgiou

This study presents a survey of the most recent learning approaches and algorithms related to fuzzy cognitive maps (FCMs). FCMs are cognitive fuzzy influence graphs based on aspects of fuzzy logic and neural networks, inheriting the main advantages of both. They gained momentum due to their dynamic characteristics and learning capabilities, which make them essential for modeling and decision-making tasks. A number of efficient learning algorithms for FCMs, which modify the FCM weight matrix, have been developed to update the initial knowledge of human experts and/or incorporate knowledge from historical data in order to produce learned weights. The proposed learning techniques concentrate on three directions: producing weight matrices on the basis of historical data, adapting the cause-effect relationships of the FCM on the basis of experts' intervention, and producing weight matrices by combining experts' knowledge and data. The learning techniques can be categorized into three groups on the basis of the learning paradigm: Hebbian-based, population-based, and hybrid, the last combining the main aspects of Hebbian-based and population-based learning. According to the existing literature, these types of learning algorithms are the most efficient and most widely used to train FCMs. A survey of recent advances in learning methodologies and algorithms for FCMs, presenting their dynamic capabilities and application characteristics in diverse scientific fields, is established here.
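The machinery the surveyed algorithms operate on, concept activations propagated through a weight matrix, can be sketched directly; the three-concept map, sigmoid squashing, and the simplified nonlinear-Hebbian-style update below are illustrative assumptions rather than any one surveyed algorithm:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fcm_step(state, w):
    """One FCM inference step: each concept aggregates weighted influences."""
    n = len(state)
    return [sigmoid(sum(w[j][i] * state[j] for j in range(n))) for i in range(n)]

def hebbian_update(state, w, eta=0.05):
    """Simplified nonlinear-Hebbian-style adaptation: strengthen an existing
    edge w[j][i] when concepts j and i are co-activated."""
    n = len(state)
    for j in range(n):
        for i in range(n):
            if j != i and w[j][i] != 0.0:
                w[j][i] += eta * state[j] * (state[i] - w[j][i] * state[j])
                w[j][i] = max(-1.0, min(1.0, w[j][i]))  # keep weights in [-1, 1]

# Tiny 3-concept map with expert-seeded weights.
w = [[0.0, 0.6, 0.0], [0.0, 0.0, 0.4], [-0.3, 0.0, 0.0]]
state = [0.8, 0.5, 0.5]
for _ in range(20):
    state = fcm_step(state, w)
    hebbian_update(state, w)
print([round(s, 3) for s in state])
```

Interleaving inference steps with weight adaptation, as here, is the shape shared by the Hebbian-based family the review describes; population-based methods instead search over whole weight matrices.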

  • Conference Article
  • Citations: 28
  • 10.1145/225298.225342
On learning bounded-width branching programs
  • Jan 1, 1995
  • Funda Ergün + 2 more

In this paper, we study PAC-learning algorithms for specialized classes of deterministic finite automata (DFA). In particular, we study branching programs, and we investigate the influence of the width of the branching program on the difficulty of the learning problem. We first present a distribution-free algorithm for learning width-2 branching programs. We also give an algorithm for the proper learning of width-2 branching programs under the uniform distribution on labeled samples. We then show that the existence of an efficient algorithm for learning width-3 branching programs would imply the existence of an efficient algorithm for learning DNF, which is not known to be the case. Finally, we show that the existence of an algorithm for learning width-3 branching programs would also yield an algorithm for learning a very restricted version of parity with noise.

  • Research Article
  • Citations: 65
  • 10.1109/tsg.2021.3054375
Anomaly Detection, Localization and Classification Using Drifting Synchrophasor Data Streams
  • Jan 26, 2021
  • IEEE Transactions on Smart Grid
  • A Ahmed + 3 more

With the ongoing automation and digitization of the electric power system, many Phasor Measurement Units (PMUs) have been deployed for monitoring and control. PMU data can contain multiple anomalies, and much past research has concentrated on training machine/deep learning algorithms offline for anomaly detection over PMU data (i.e., not in real time). These algorithms, when trained offline on a sample rather than the population of the dataset, fail to account for the dynamic behavior of the power grid in real time, resulting in low accuracy. Given this dynamic behavior (e.g., changes in load, generation, distributed energy resource (DER) switching, network topology, and controls), the definition of a data anomaly varies in time and requires online training. A fundamental challenge is to enable online (i.e., real-time) training of machine/deep learning algorithms for anomaly detection over streaming PMU data. While machine/deep learning is often desirable for managing data streams, training a deep learning algorithm over streaming PMU data is nontrivial due to changes in data statistics caused by the dynamic stream. This article proposes PMUNET, a novel device-level deep-learning-based data-driven approach for anomaly detection, localization, and classification over streaming PMU data, using online learning and a multivariate data-drift detection algorithm. Two variants of PMUNET, Dynamic data Change Driven Learning (DCDL) and Continuity Driven Learning (CDL), are proposed and compared. DCDL trains the deep learning algorithm whenever the definition of an anomaly changes due to power grid dynamics, whereas CDL continuously trains the deep learning algorithm over the PMU data stream. The experimental results verify that DCDL outperforms CDL and other efficient anomaly detection methods over multiple events, such as faults and load/generator/capacitor/DER variations and switching, for the IEEE 14- and 39-bus test systems as well as real industrial PMU data. The DCDL variant of PMUNET improves over the existing approach with a gain of 2%-10% in terms of accuracy, false-positive rate, and false-negative rate.
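The DCDL-style trigger, retraining only when the stream's statistics drift, can be sketched with a simple mean-shift detector; the window size, threshold, and synthetic level-shift stream below are illustrative assumptions (the paper uses a multivariate drift detector over real PMU data):

```python
from collections import deque

class DriftDetector:
    """Flag drift when the rolling window's mean moves more than three
    reference standard deviations from the reference mean, then re-baseline."""
    def __init__(self, window=50):
        self.buf = deque(maxlen=window)
        self.ref = None  # (mean, std) of the reference regime

    def update(self, x):
        self.buf.append(x)
        if len(self.buf) < self.buf.maxlen:
            return False
        mean = sum(self.buf) / len(self.buf)
        if self.ref is None:
            var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
            self.ref = (mean, max(var ** 0.5, 1e-9))
            return False
        if abs(mean - self.ref[0]) > 3.0 * self.ref[1]:
            self.ref = None  # drift: re-baseline on the new regime
            return True
        return False

# Synthetic stream: a stable level, then an abrupt shift (a "grid event").
stream = [1.0 + 0.01 * (i % 5) for i in range(200)]
stream += [5.0 + 0.01 * (i % 5) for i in range(200)]

det = DriftDetector()
retrain_at = [i for i, x in enumerate(stream) if det.update(x)]
print(retrain_at)  # DCDL would retrain at these indices; CDL retrains at every step
```

The contrast the paper draws falls out directly: DCDL spends training effort only at the flagged indices, while CDL pays it on every sample.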

  • Research Article
  • Citations: 7
  • 10.1109/access.2020.2968983
Local Sigmoid Method: Non-Iterative Deterministic Learning Algorithm for Automatic Model Construction of Neural Network
  • Jan 1, 2020
  • IEEE Access
  • Syukron Abu Ishaq Alfarozi + 3 more

A non-iterative learning algorithm for artificial neural networks is an alternative way to optimize the network parameters with extremely fast convergence. Extreme learning machine (ELM) is one of the fastest non-iterative learning algorithms for the single-hidden-layer feedforward neural network (SLFN) model. ELM uses a randomization technique that requires a large number of hidden nodes to achieve high accuracy. This leads to a large and complex model, which is slow at inference time. Previously, we reported the analytical incremental learning (AIL) algorithm, a compact model with a non-iterative deterministic learning algorithm, as an alternative. However, AIL cannot grow its set of hidden nodes, due to a node saturation problem. Here, we describe a local sigmoid method (LSM) that is also a sufficiently compact model with a non-iterative deterministic learning algorithm, overcoming both the ELM randomization and AIL node saturation problems. The LSM algorithm is based on a divide-and-conquer method that splits the dataset into several subsets which are easier to optimize separately. Each subset can be associated with a local segment, represented as a hidden node that preserves the local information of the subset. This technique helps us understand the function of each hidden node of the network built. Moreover, we can use the same technique to explain the function of hidden nodes learned by backpropagation, the iterative algorithm. Based on our experimental results, LSM is more accurate than other non-iterative learning algorithms and produces one of the most compact models.
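For contrast with LSM, the ELM baseline described above (random hidden layer, then a single least-squares solve, no gradient iterations) is easy to sketch; the toy regression target and layer sizes are illustrative assumptions:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """ELM-style non-iterative fit: a random hidden layer followed by a
    single least-squares solve for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)              # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression target standing in for a real dataset.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
W, b, beta = elm_train(X, y)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
print(f"train MSE: {mse:.4f}")
```

The randomized hidden layer is exactly the part LSM replaces: instead of many random nodes, it constructs each node deterministically from a local segment of the data.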

  • Conference Article
  • Citations: 11
  • 10.1145/3307339.3342180
Automate the Peripheral Arterial Disease Prediction in Lower Extremity Arterial Doppler Study using Machine Learning and Neural Networks
  • Sep 4, 2019
  • Lena Ara + 3 more

This research work aims to automate the prediction of the peripheral arterial diseases implied by Lower Extremity Arterial Doppler (LEAD) studies by applying machine learning and artificial intelligence algorithms. This study is the first to use such algorithms to analyze LEAD data for peripheral arterial disease prediction. Specifically, we employ a Convolutional Neural Network (CNN) to classify the waveform into three types. The classified waveforms are used as input to the learning algorithms for disease prediction. We evaluate two traditional machine learning algorithms as well as two neural networks to predict normal studies and three types of arterial disease: aortoiliac disease, femoral-popliteal arterial disease, and trifurcation disease. A hierarchical neural network (HNN) model is investigated to deal with the imbalanced data set: the first level of the HNN separates normal studies from diseased ones, and the remaining two neural networks successively separate the individual diseases from the rest. The HNN achieved high F1 scores through 10-fold cross-validation: 99% on the normal case, 97% on aortoiliac disease, 94% on femoral-popliteal arterial disease, and 89% on trifurcation disease. The comparison shows that the HNN works better than a multilayer perceptron, random forests, and an SVM. The overall result demonstrates that machine learning and artificial intelligence algorithms can be developed to predict the peripheral arterial diseases implied by LEAD studies while reducing reading variability in vascular laboratories.
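The two-level routing idea generalizes beyond the paper's setup; below is a hedged sketch with scikit-learn logistic regressions standing in for the neural networks, on synthetic data where class 0 plays the role of "normal":

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic 4-class data: class 0 = "normal", classes 1-3 = disease types.
X = rng.normal(size=(400, 6)) + np.repeat(np.arange(4), 100)[:, None]
y = np.repeat(np.arange(4), 100)

# Level 1: normal vs. any disease.
lvl1 = LogisticRegression(max_iter=1000).fit(X, (y > 0).astype(int))
# Level 2: disease type, trained only on the disease subset.
mask = y > 0
lvl2 = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])

def predict(x):
    """Route each sample down the hierarchy."""
    x = x.reshape(1, -1)
    if lvl1.predict(x)[0] == 0:
        return 0  # normal
    return int(lvl2.predict(x)[0])

preds = np.array([predict(x) for x in X])
acc = float((preds == y).mean())
print(f"train accuracy: {acc:.2f}")
```

Training the second-level model only on diseased samples is what mitigates the imbalance: the rare classes never compete against the dominant "normal" class.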

  • Conference Article
  • Citations: 182
  • 10.1109/hpca.2016.7446050
TABLA: A unified template-based framework for accelerating statistical machine learning
  • Mar 1, 2016
  • Divya Mahajan + 6 more

A growing number of commercial and enterprise systems increasingly rely on compute-intensive Machine Learning (ML) algorithms. While the demand for these compute-intensive applications is growing, the performance benefits from general-purpose platforms are diminishing. Field Programmable Gate Arrays (FPGAs) provide a promising path forward to accommodate the needs of machine learning algorithms and represent an intermediate point between the efficiency of ASICs and the programmability of general-purpose processors. However, acceleration with FPGAs still requires long development cycles and extensive expertise in hardware design. To tackle this challenge, instead of designing an accelerator for a single machine learning algorithm, we present TABLA, a framework that generates accelerators for a class of machine learning algorithms. The key is to identify the commonalities across a wide range of machine learning algorithms and exploit them to provide a high-level abstraction for programmers. TABLA leverages the insight that many learning algorithms can be expressed as a stochastic optimization problem. Learning then becomes solving an optimization problem using stochastic gradient descent that minimizes an objective function over the training data. The gradient descent solver is fixed, while the objective function changes for different learning algorithms. TABLA provides a template-based framework to accelerate this class of learning algorithms: a developer specifies the learning task by expressing only the gradient of the objective function in our high-level language. TABLA then automatically generates the synthesizable implementation of the accelerator for FPGA realization using a set of hand-optimized templates. We use TABLA to generate accelerators for ten different learning tasks targeted at a Xilinx Zynq FPGA platform.
We rigorously compare the benefits of FPGA acceleration to multi-core CPUs (ARM Cortex A15 and Xeon E3) and many-core GPUs (Tegra K1, GTX 650 Ti, and Tesla K40) using real hardware measurements. TABLA-generated accelerators provide 19.4x and 2.9x average speedup over the ARM and Xeon processors, respectively. These accelerators provide 17.57x, 20.2x, and 33.4x higher Performance-per-Watt in comparison to Tegra, GTX 650 Ti and Tesla, respectively. These benefits are achieved while the programmers write less than 50 lines of code.
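TABLA's premise, a fixed stochastic-gradient-descent solver with a swappable objective gradient, can be sketched in software; the linear-regression gradient and all learning-rate settings below are illustrative assumptions:

```python
import random

def sgd(gradient, w, data, lr=0.05, epochs=100, seed=0):
    """Generic SGD loop: this solver stays fixed across learning tasks."""
    rng = random.Random(seed)
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            g = gradient(w, x, y)
            w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

def linreg_grad(w, x, y):
    """Per-task plug-in: gradient of 0.5 * (w.x - y)^2 for one sample."""
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [err * xi for xi in x]

# Recover w close to [2, -1] from noiseless samples of y = 2*x0 - x1.
rng = random.Random(42)
data = []
for _ in range(100):
    x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    data.append((x, 2 * x[0] - 1 * x[1]))
w = sgd(linreg_grad, [0.0, 0.0], data)
print([round(v, 2) for v in w])
```

Swapping `linreg_grad` for, say, a logistic-regression or SVM gradient changes the learning task without touching the solver, which is the commonality TABLA turns into an accelerator template.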

  • Research Article
  • Citations: 49
  • 10.1007/s10648-024-09862-5
Screening Smarter, Not Harder: A Comparative Analysis of Machine Learning Screening Algorithms and Heuristic Stopping Criteria for Systematic Reviews in Educational Research
  • Feb 8, 2024
  • Educational Psychology Review
  • Diego G Campos + 8 more

Systematic reviews and meta-analyses are crucial for advancing research, yet they are time-consuming and resource-demanding. Although machine learning and natural language processing algorithms may reduce this time and these resources, their performance has not been tested in education and educational psychology, and there is a lack of clear guidance on when researchers should stop the reviewing process. In this study, we conducted a retrospective screening simulation using 27 systematic reviews in education and educational psychology. We evaluated the sensitivity, specificity, and estimated time savings of several learning algorithms and heuristic stopping criteria. The results showed, on average, a 58% (SD = 19%) reduction in the screening workload of irrelevant records when using learning algorithms for abstract screening, and an estimated time saving of 1.66 days (SD = 1.80). A random forest learner with sentence bidirectional encoder representations from transformers (Sentence-BERT) features outperformed the other algorithms. This finding emphasizes the importance of incorporating semantic and contextual information during feature extraction and modeling in the screening process. Furthermore, we found that 95% of all relevant abstracts within a given dataset can be retrieved using heuristic stopping rules. Specifically, an approach that stops the screening process after classifying 20% of records and then consecutively classifying 5% of irrelevant papers yielded the most significant gains in specificity (M = 42%, SD = 28%). However, the performance of the heuristic stopping criteria depended on the learning algorithm used and on the length and proportion of relevant papers in an abstract collection. Our study provides empirical evidence on the performance of machine learning screening algorithms for abstract screening in systematic reviews in education and educational psychology.
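A heuristic stopping rule of the kind evaluated above can be sketched directly; the rule shape (screen a 20% burn-in, then stop after a 5%-of-N run of consecutive irrelevant records) follows the description, while the machine-ranked toy data is an illustrative assumption:

```python
def stopping_point(labels, burn_in=0.20, window=0.05):
    """Return how many records to screen before stopping: after the burn-in
    share, stop once `window * N` consecutive records are all irrelevant.
    `labels` is the ranked screening order (True = relevant)."""
    n = len(labels)
    run_needed = max(1, int(window * n))
    run = 0
    for i, relevant in enumerate(labels):
        run = 0 if relevant else run + 1
        if i + 1 >= int(burn_in * n) and run >= run_needed:
            return i + 1
    return n

# Machine-ranked order: the relevant papers are concentrated near the front.
ranked = [True] * 10 + [False] * 90
stop = stopping_point(ranked)
print(f"screened {stop} of {len(ranked)} records")  # screened 20 of 100 records
```

On this idealized ranking, screening stops after 20 records while every relevant paper has already been seen, which is the workload saving the study quantifies; on a worse ranking the same rule would stop later or miss relevant records.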

  • Research Article
  • Citations: 51
  • 10.1049/trit.2018.1007
Adaptive PID controller based on Q ‐learning algorithm
  • Nov 14, 2018
  • CAAI Transactions on Intelligence Technology
  • Qian Shi + 3 more

An adaptive proportional-integral-derivative (PID) controller based on the Q-learning algorithm is proposed to balance the cart-pole system in a simulation environment. The controller is trained with the Q-learning algorithm and uses the learned Q-tables to change the gains of linear PID controllers according to the state of the system during the control process. Although trained from a set of fixed initial positions, the adaptive controller is able to balance the system from a series of initial positions different from those used in training, achieving equivalent or even better performance than both a conventional PID controller and a controller that uses only the Q-learning algorithm. This indicates the advantage of the Q-learning-based adaptive PID controller both in the generality of balancing the cart-pole system from a relatively wide range of initial positions and in achieving a smaller steady-state error.
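The gain-scheduling mechanism can be sketched minimally; the hand-filled Q-table (standing in for a trained one), the two gain sets, the state bins, and the time step below are all illustrative assumptions:

```python
def pid_step(gains, error, integral, prev_error, dt=0.02):
    """One step of a textbook PID law with the supplied gains."""
    kp, ki, kd = gains
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, integral

# Two candidate gain sets the agent can switch between (mild vs. aggressive).
GAIN_SETS = [(1.0, 0.0, 0.1), (4.0, 0.5, 0.8)]

def select_gains(q_table, state_bin):
    """Greedy action from the Q-table chooses which gain set to apply."""
    row = q_table[state_bin]
    return GAIN_SETS[row.index(max(row))]

# Hand-filled table standing in for the trained one: large-error states
# prefer the aggressive gains.
q_table = {"small_error": [1.0, 0.2], "large_error": [0.1, 1.5]}

gains = select_gains(q_table, "large_error")
u, integral = pid_step(gains, error=0.5, integral=0.0, prev_error=0.0)
print(gains, round(u, 2))
```

The controller itself stays a plain PID law; only the gain lookup is learned, which is what gives the scheme its generality across initial positions.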

  • Research Article
  • Citations: 3
  • 10.1137/16m1076666
Sampling Correctors
  • Jan 1, 2018
  • SIAM Journal on Computing
  • Clément L Canonne + 2 more

In many situations, samples are obtained from a noisy or imperfect source. To address such corruptions, this paper introduces the concept of a sampling corrector. Such algorithms use the structure that the distribution is purported to have in order to make on-the-fly corrections to samples drawn from probability distributions. These algorithms then act as filters between the noisy source and the end user. We show connections between sampling correctors, distribution learning algorithms, and distribution property testing algorithms. We show that these connections can be utilized to expand the applicability of known distribution learning and property testing algorithms as well as to achieve improved algorithms for those tasks. As a first step, we show how to design sampling correctors using proper learning algorithms. We then focus on the question of whether sampling correctors can be more efficient, in terms of sample complexity, than learning algorithms for the analogous families of distributions. When correcting monotonicity, we show that this is indeed the case when the corrector is also granted query access to the cumulative distribution function. We also obtain sampling correctors for monotonicity without this stronger type of access, provided that the distribution is originally very close to monotone (namely, at distance $O(1/\log^2 n)$). In addition, we consider a restricted error model that aims to capture missing-data corruptions. In this model, we show that distributions that are close to monotone have sampling correctors that are significantly more efficient than those achievable by the learning approach. We also consider the question of whether an additional source of independent random bits is required by sampling correctors to implement the correction process.

  • Conference Article
  • 10.1109/gcis.2009.433
Learning the Similarity Preserving Principal Curves
  • Jan 1, 2009
  • Mingming Sun + 3 more

The theory of Similarity Preserving Principal Curves (also called Principal Curves with Feature Continuity) has been studied previously. In this paper, we propose a practical algorithm for learning Similarity Preserving Principal Curves for a general data set. Furthermore, we propose the concept of, and learning algorithms for, second-order Similarity Preserving Principal Curves. The learning algorithms are employed to extract efficient features for data representation tasks. Experimental results show the effectiveness of the proposed learning model and algorithms.
