Concurrent operations on talk

  • TL;DR
  • Abstract
  • Literature Map
  • Similar Papers
TL;DR

This paper argues that analyzing concurrent operations in conversation is central to understanding pragmatics and conversational dynamics; the abstract itself does not detail specific methodologies or empirical findings.

Abstract

The analysis of conversation has a strong relevance to the study of pragmatics. Thus in introducing the scope of …

Similar Papers
  • Research Article
  • Cited by 17
  • 10.1136/bmj.j4244
Concurrent bariatric operations and association with perioperative outcomes: registry based cohort study
  • Sep 26, 2017
  • The BMJ
  • Jason B Liu + 8 more

Objective To determine whether perioperative outcomes differ between patients undergoing concurrent compared with non-concurrent bariatric operations in the USA.Design Retrospective, propensity score matched cohort study.Setting Hospitals in the US accredited...

  • Research Article
  • Cited by 32
  • 10.1097/sla.0000000000002358
Outcomes of Concurrent Operations
  • Sep 1, 2017
  • Annals of Surgery
  • Jason B Liu + 9 more

To determine whether concurrently performed operations are associated with an increased risk for adverse events. Concurrent operations occur when a surgeon is simultaneously responsible for critical portions of 2 or more operations. How this practice affects patient outcomes is unknown. Using American College of Surgeons' National Surgical Quality Improvement Program data from 2014 to 2015, operations were considered concurrent if they overlapped by ≥60 minutes or in their entirety. Propensity-score-matched cohorts were constructed to compare death or serious morbidity (DSM), unplanned reoperation, and unplanned readmission in concurrent versus non-concurrent operations. Multilevel hierarchical regression was used to account for the clustered nature of the data while controlling for procedure and case mix. There were 1430 (32.3%) surgeons from 390 (77.7%) hospitals who performed 12,010 (2.3%) concurrent operations. Plastic surgery (n = 393 [13.7%]), otolaryngology (n = 470 [11.2%]), and neurosurgery (n = 2067 [8.4%]) were specialties with the highest proportion of concurrent operations. Spine procedures were the most frequent concurrent procedures overall (n = 2059/12,010 [17.1%]). Unadjusted rates of DSM (9.0% vs 7.1%; P < 0.001), reoperation (3.6% vs 2.7%; P < 0.001), and readmission (6.9% vs 5.1%; P < 0.001) were greater in the concurrent operation cohort versus the non-concurrent. After propensity score matching and risk-adjustment, there was no significant association of concurrence with DSM (odds ratio [OR] 1.08; 95% confidence interval [CI] 0.96-1.21), reoperation (OR 1.16; 95% CI 0.96-1.40), or readmission (OR 1.14; 95% CI 0.99-1.29). In these analyses, concurrent operations were not detected to increase the risk for adverse outcomes. These results do not lessen the need for further studies, continuous self-regulation and proactive disclosure to patients.
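
The matched-cohort design described above can be illustrated with a minimal, self-contained sketch: posit a logistic propensity model, then greedily pair each treated unit with the nearest-score control within a caliper. The coefficients, caliper width, and toy data here are illustrative assumptions, not values from the study.

```python
import math
import random

random.seed(42)

def propensity_score(x, coef=2.0, intercept=-1.0):
    """Logistic model of P(treated | covariate x) -- illustrative coefficients."""
    return 1.0 / (1.0 + math.exp(-(intercept + coef * x)))

def greedy_match(treated, control, caliper=0.05):
    """1:1 nearest-neighbour matching on the propensity score within a caliper.

    treated, control: lists of (unit_id, score) pairs.
    Returns a list of (treated_id, control_id) matched pairs.
    """
    matches = []
    pool = dict(control)  # unit_id -> score; shrinks as controls are used up
    for t_id, t_score in treated:
        best_id, best_gap = None, caliper
        for c_id, c_score in pool.items():
            gap = abs(t_score - c_score)
            if gap <= best_gap:
                best_id, best_gap = c_id, gap
        if best_id is not None:
            matches.append((t_id, best_id))
            del pool[best_id]  # each control is matched at most once
    return matches

# Toy cohort: one covariate drives both treatment assignment and the score.
covariate = {i: random.random() for i in range(200)}
score = {i: propensity_score(covariate[i]) for i in covariate}
treated = [(i, score[i]) for i in covariate if random.random() < score[i]]
treated_ids = {i for i, _ in treated}
control = [(i, score[i]) for i in covariate if i not in treated_ids]

pairs = greedy_match(treated, control)
```

Comparing outcome rates within `pairs` (rather than across the raw cohorts) is what removes the measured-covariate imbalance between concurrent and non-concurrent groups.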

  • Research Article
  • Cited by 1
  • 10.1016/0020-0255(93)90126-7
Concurrent operations in multi-attribute linear hashing
  • Oct 15, 1993
  • Information Sciences
  • Ho Pao-Chung + 1 more

Concurrent operations in multi-attribute linear hashing

  • Conference Article
  • Cited by 6
  • 10.1109/eumc.2016.7824533
Parallel combination of high-efficiency amplifiers with spurious rejection for concurrent multiband operation
  • Oct 1, 2016
  • Jun Enomoto + 4 more

An efficient concurrent multiband power amplifier configuration has been proposed for high-data-rate wireless communication systems. Single-band high-efficiency power amplifiers are designed by adding spurious-rejection functions embedded in the input and output fundamental-frequency matching circuits, and these amplifiers are then connected in parallel. This configuration offers significant merits over usual dual-band or broadband amplifiers, especially with regard to distortion characteristics. To confirm this, a 4.5-/8.5-GHz-band GaN HEMT amplifier was fabricated; in concurrent operation it exhibited maximum drain efficiencies of 64% and 54% and maximum power-added efficiencies of 61% and 41% at 4.49 GHz and 8.42 GHz, respectively, with a highly suppressed near-band spurious level of less than -38 dBc.

  • Research Article
  • Cited by 22
  • 10.1109/tmtt.2021.3091507
A Novel 3-Way Dual-Band Doherty Power Amplifier for Enhanced Concurrent Operation
  • Sep 1, 2021
  • IEEE Transactions on Microwave Theory and Techniques
  • Ruwaybih Alsulami + 6 more

This article presents the architecture and design methodology for a new type of dual-band Doherty power amplifier (DB-DPA), referred to as 3-Way DB-DPA, which consists of a main amplifier for each band and an auxiliary amplifier handling both bands. The 3-Way DB-DPA improves the average drain efficiency in concurrent dual-band operation compared to the traditional 2-Way DB-DPA, by avoiding early clipping in the main amplifiers, while benefiting from load-pulling from the auxiliary power amplifier (PA). This improvement is verified in theory and simulation at the current-source reference planes and in measurement with a fabricated 1.5- and 2-GHz dual-band PA. A statistical analysis using 2-D continuous-wave (CW) signals with long-term evolution (LTE) probability distribution functions (PDFs) is performed, demonstrating an improvement in the concurrent average efficiency of 15 percentage points compared to the conventional 2-Way DB-DPA. In nonconcurrent operation, the measured CW drain efficiency in the lower band (1.5 GHz) is 82.8% at peak and 66.6% at 9.6-dB backoff, and the measured CW drain efficiency in the upper band (2.0 GHz) is 70.0% at peak and 48.4% at 9.4-dB backoff. The CW concurrent-balanced drain efficiency reaches 66.2/52.0% in the 3-Way DB-DPA at 3-/6-dB backoff. In single-band operation at 1.5/2.0 GHz, the average power and average drain efficiency after linearization by digital predistortion (DPD) are 35.1/37.4 dBm and 65.0/53.7%, respectively, for an LTE signal with 10-MHz bandwidth and 6.1-dB peak-to-average power ratio (PAPR). In concurrent operation, the 3-Way DB-DPA is driven by two 10-MHz LTE uncorrelated signals at 1.5 GHz with 6.86-dB PAPR and at 2.0 GHz with 6.26-dB PAPR, and the average total power and average concurrent drain efficiency after DPD are 37.5 dBm and 54.24%, respectively.

  • Research Article
  • Cited by 1
  • 10.1145/67387.67393
Taking concurrency seriously (position paper)
  • Sep 26, 1988
  • ACM SIGPLAN Notices
  • M Herlihy

I'd like to propose a challenge to language designers interested in concurrency: how well do your favorite constructs support highly-concurrent data structures? For example, consider a real-time system consisting of a pool of sensor and actuator processes that communicate via a priority queue in shared memory. Processes execute asynchronously. When a sensor process detects a condition requiring a response, it records the condition, assigns it a priority, and places the record in the queue. Whenever an actuator process becomes idle, it dequeues the highest priority item from the queue and takes appropriate action. The conventional way to prevent concurrent queue operations from interfering is to execute each operation as a critical section: only one process at a time is allowed to access the data structure. As long as one process is executing an operation, any other needing to access the queue must wait. Although this approach is widely used, it has significant drawbacks. Similar concerns arise even in systems not subject to real-time demands or failures. For example, process execution speeds may vary considerably if processors are multiplexed among multiple processes. If a process executing in a critical region takes a page fault, exhausts its quantum, or is swapped out, then other runnable processes needing to use that resource will be unable to make progress. An implementation of a concurrent object is wait-free if it guarantees that any process will complete any operation within a fixed number of steps, independent of the level of contention and the execution speeds of the other processes. To construct a wait-free implementation of the shared priority queue, we must break each enqueue or dequeue operation into a non-atomic sequence of atomic steps, where each atomic step is a primitive operation directly supported by the hardware, such as read, write, or fetch-and-add. To show that such an implementation is correct, it is necessary to show that (1) each operation's sequence of primitive steps has the desired effect (e.g., enqueuing or dequeuing an item) regardless of how it is interleaved with other concurrent operations, and (2) that each operation terminates within a fixed number of steps regardless of variations in speed (including arbitrary delay) of other processes. Support for wait-free synchronization requires genuinely new language constructs, not just variations on conventional approaches such as semaphores, monitors, tasks, or message-passing. I don't know what these constructs look like, but in this position paper, I would like to suggest some research directions that could lead, directly or indirectly, to progress in this area. We need to keep up with work in algorithms. To pick just one example, we now know that certain kinds of wait-free synchronization, e.g., implementing a FIFO queue from read/write registers, require randomized protocols in which processes flip coins to choose their next steps [3, 1]. The implications of such results for language design remain unclear, but suggestive. We also need to pay more attention to specification. Although transaction serializability has become widely accepted as the basic correctness condition for databases and certain distributed systems, identifying analogous properties for concurrent objects remains an active area of research [2].
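
The slot-claiming idea behind fetch-and-add can be sketched as follows. This is a deliberately simplified illustration, not the wait-free FIFO-queue constructions Herlihy cites: it handles only enqueues into a bounded array, and it simulates the atomic primitive with a lock (a real implementation would rely on the hardware instruction).

```python
import threading

class FetchAndAdd:
    """Stand-in for the hardware fetch-and-add primitive. The lock here only
    models the atomicity the hardware would provide in a single instruction;
    the enqueue logic built on top never waits on another operation's progress."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def fetch_and_add(self, delta=1):
        with self._lock:
            old = self._value
            self._value += delta
            return old

CAPACITY = 1000
slots = [None] * CAPACITY   # bounded array backing the queue
tail = FetchAndAdd()

def enqueue(item):
    """Claim a unique slot in one atomic step, then write to it privately.
    No enqueue ever blocks waiting for another enqueue to finish."""
    i = tail.fetch_and_add()
    slots[i] = item

# Ten producers enqueue 100 items each, concurrently.
threads = [threading.Thread(target=lambda base=b: [enqueue((base, k)) for k in range(100)])
           for b in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each operation claims its index in a single atomic step, a producer that stalls after claiming slot `i` delays no one else, which is exactly the property a critical-section queue cannot offer.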

  • Conference Article
  • Cited by 2
  • 10.1109/cscwd.2012.6221859
Consistency maintenance based on the matching of topological entity
  • May 1, 2012
  • Xiaoxia Li + 3 more

Consistency maintenance is one of the most important problems in collaborative CAD systems. However, existing consistency maintenance mechanisms limit multi-user interaction. This paper presents a consistency maintenance method to gain a less-constrained multi-user interaction. First, the causal relation between modeling operations is preserved using the state vector. Then, the concurrent deletion operations are checked to decide if the current operation is masked. If not, the solution for topological entities' matching is adopted to deal with the operations that use topological entities. Then, for those operations which do not use topological entities, the corresponding mechanism is adopted according to their types. By these mechanisms, the commutative, masked and conflicted relations between the concurrent operations, are explored and the conflicts are solved. The experiments prove that our method can support less-constrained multi-user interaction.
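
The causal bookkeeping this method relies on can be sketched with ordinary vector clocks (state vectors); the `happened_before`/`concurrent` names and the three-site example are illustrative, not taken from the paper.

```python
def happened_before(a, b):
    """True if the operation stamped with state vector `a` causally precedes
    the one stamped with `b`: componentwise <= and strictly less somewhere."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def concurrent(a, b):
    """Operations are concurrent when neither causally precedes the other;
    only these need the masking/matching/conflict checks described above."""
    return not happened_before(a, b) and not happened_before(b, a)

# Three collaborating sites; each component counts operations seen from a site.
assert happened_before((2, 1, 0), (2, 2, 0))   # the second op saw the first
assert concurrent((2, 1, 0), (1, 2, 0))        # neither op saw the other
```

Causally ordered operations can be applied directly in order; only the concurrent ones are routed through the deletion-masking and topological-entity-matching steps.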

  • Conference Article
  • Cited by 1
  • 10.1145/67386.67393
Taking concurrency seriously (position paper)
  • Jan 1, 1988
  • M Herlihy

I'd like to propose a challenge to language designers interested in concurrency: how well do your favorite constructs support highly-concurrent data structures? For example, consider a real-time system consisting of a pool of sensor and actuator processes that communicate via a priority queue in shared memory. Processes execute asynchronously. When a sensor process detects a condition requiring a response, it records the condition, assigns it a priority, and places the record in the queue. Whenever an actuator process becomes idle, it dequeues the highest priority item from the queue and takes appropriate action. The conventional way to prevent concurrent queue operations from interfering is to execute each operation as a critical section: only one process at a time is allowed to access the data structure. As long as one process is executing an operation, any other needing to access the queue must wait. Although this approach is widely used, it has significant drawbacks. It is not fault-tolerant. If one process unexpectedly halts in the middle of an operation, then other processes attempting to access the queue will wait forever. Although it may sometimes be possible to detect the failure and preempt the queue, such detection takes time, it may be unreliable, and it may be impossible to restore the data structure to a consistent state. Critical sections force faster processes to wait for slower processes. Such waiting may be particularly undesirable in heterogeneous architectures, where some processors may be much faster than others. For example, a fast actuator process should not have to remain idle whenever a much slower sensor process is enqueuing a new item. Such waiting is also undesirable if each processor is dedicated to a single process, where delaying a process means idling a valuable hardware resource. Similar concerns arise even in systems not subject to real-time demands or failures. 
For example, process execution speeds may vary considerably if processors are multiplexed among multiple processes. If a process executing in a critical region takes a page fault, exhausts its quantum, or is swapped out, then other runnable processes needing to use that resource will be unable to make progress. An implementation of a concurrent object is wait-free if it guarantees that any process will complete any operation within a fixed number of steps, independent of the level of contention and the execution speeds of the other processes. To construct a wait-free implementation of the shared priority queue, we must break each enqueue or dequeue operation into a non-atomic sequence of atomic steps, where each atomic step is a primitive operation directly supported by the hardware, such as read, write, or fetch-and-add. To show that such an implementation is correct, it is necessary to show that (1) each operation's sequence of primitive steps has the desired effect (e.g., enqueuing or dequeuing an item) regardless of how it is interleaved with other concurrent operations, and (2) that each operation terminates within a fixed number of steps regardless of variations in speed (including arbitrary delay) of other processes. Support for wait-free synchronization requires genuinely new language constructs, not just variations on conventional approaches such as semaphores, monitors, tasks, or message-passing. I don't know what these constructs look like, but in this position paper, I would like to suggest some research directions that could lead, directly or indirectly, to progress in this area. We need to keep up with work in algorithms. To pick just one example, we now know that certain kinds of wait-free synchronization, e.g., implementing a FIFO queue from read/write registers, require randomized protocols in which processes flip coins to choose their next steps [3, 1]. The implications of such results for language design remain unclear, but suggestive. 
We also need to pay more attention to specification. Although transaction serializability has become widely accepted as the basic correctness condition for databases and certain distributed systems, identifying analogous properties for concurrent objects remains an active area of research [2].

  • Research Article
  • Cited by 27
  • 10.1109/4.726569
A process-independent, 800-MB/s, DRAM byte-wide interface featuring command interleaving and concurrent memory operation
  • Jan 1, 1998
  • IEEE Journal of Solid-State Circuits
  • M.M Griffin + 4 more

An 800-MB/s/pin byte-wide interface DRAM is described that meets the bandwidth requirements for modern microprocessor systems. Clock recovery and I/O circuitry perform to specification across multiple DRAM manufacturers' processes. The clock-recovery circuitry is described in depth for areas that are sensitive to power-supply noise. I/O circuitry for preserving signal integrity in high-speed bussed systems is described. Design methodology that enables rapid simulation and verification of the design in each fabrication process is discussed. Logic that enables interleaved transactions with concurrent operation is detailed. Computer-aided-design tools for large aspect merged logic/memory are discussed. Last, measured results are summarized showing clock jitter, setup and hold timing, and period versus V/sub dd/ operation.

  • Research Article
  • Cited by 7
  • 10.1016/j.scico.2017.03.008
Static analysis of cloud elasticity
  • Apr 7, 2017
  • Science of Computer Programming
  • Abel Garcia + 2 more

Static analysis of cloud elasticity

  • Research Article
  • Cited by 127
  • 10.1109/tpel.2015.2480122
Wireless Power Transfer with Concurrent 200 kHz and 6.78 MHz Operation in a Single Transmitter Device
  • Jan 1, 2015
  • IEEE Transactions on Power Electronics
  • Dukju Ahn + 1 more

This paper proposes a wireless power transfer (WPT) transmitter that can concurrently operate at 200 kHz and 6.78 MHz in order to simultaneously power two receivers operating with different frequency standards. Unlike a dual-resonant single-coil design, the use of two separate coils decouples the design for one frequency from the other, enabling independent selection of inductance and Q-factor to simultaneously maximize efficiency at both frequencies. The two coils then support separate coil drivers, enabling concurrent multistandard operation. Dual-band operation is achieved in the same area as an equivalent single-band design by placing a low-frequency coil within the geometry of a high-frequency coil, where the outer diameter of the inner coil is reduced by only 1.2 cm in a 12.5 × 8.9-cm² design. Circuit analysis is presented to identify the eddy current between the two Tx coils and its associated loss, after which an eddy-current filter design is proposed. To validate the proposed design, a dual-mode transmitter, along with two receivers designed at 6.78 MHz and 200 kHz, respectively, have been fabricated. At 25-mm separation, the system is able to simultaneously deliver 9 and 7.4 W with efficiencies of 78% and 70.6% at 6.78 MHz and 200 kHz, respectively.

  • Research Article
  • Cited by 8
  • 10.1287/ijoc.14.1.68.7708
Assembly-Line Scheduling with Concurrent Operations and Parallel Machines
  • Feb 1, 2002
  • INFORMS Journal on Computing
  • George J Kyparisis + 1 more

This paper addresses the assembly-line scheduling problem with concurrent operations per stage and parallel machines. This problem can be briefly described as the problem of sequencing a predetermined set of parts with known processing-time requirements through an r-station assembly line so that the makespan is minimized. A set of concurrent operations must be performed at each station on each part. This set of operations is usually part-dependent since different parts have different processing requirements. In this paper, we construct schedules in quadratic time for this strongly NP-hard problem with an absolute performance guarantee independent of the number of jobs. This makes our schedules asymptotically optimal as the number of jobs becomes very large. The implementation of our schedules facilitates the efficient use of concurrent operations and prevents the assembly line from degenerating into performing operations serially, which would result in low production rates. We utilize compact vector summation techniques and first construct a schedule for the case where each concurrent operation is performed by a single machine. We then generalize our schedule to the case where each concurrent operation is performed by a set of identical parallel machines.
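
For contrast with the paper's compact-vector-summation construction, the parallel-machine part of the problem has a classical greedy baseline, sketched below. The function and the toy instance are illustrative assumptions, not the authors' algorithm.

```python
def list_schedule(jobs, machines):
    """Greedy list scheduling: place each job on the machine that frees up
    earliest. A textbook baseline whose makespan is within a factor of 2 of
    optimal, unlike the paper's additive (absolute) guarantee."""
    loads = [0.0] * machines
    assignment = []
    for duration in jobs:
        m = min(range(machines), key=loads.__getitem__)  # least-loaded machine
        loads[m] += duration
        assignment.append(m)
    return max(loads), assignment

# Five jobs on two parallel machines: greedy gets makespan 7, optimal is 6.
makespan, assignment = list_schedule([3, 3, 2, 2, 2], machines=2)
```

The gap between such multiplicative guarantees and the paper's constant additive bound is what makes its schedules asymptotically optimal as the number of jobs grows.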

  • Research Article
  • 10.3785/j.issn.1008-973x.2012.12.001
Control method of concurrent operation for dismounting and assembling simulation in cooperative maintenance
  • Dec 1, 2012
  • Journal of ZheJiang University (Engineering Science)
  • Liu Zhen-Yu, Zhou Si-Hang

To effectively resolve concurrent operations on the same object during virtual maintenance and accomplish cooperative dismounting, a concurrency control method for coordinating operations was proposed. A joint linkage matrix containing joint-type and joint-axis information is created to preprocess the assembly model. Based on the judgment of the conflict source, which consists of the assembly model, assembly unit, property sort, and property value, concurrent operations are subdivided and the concurrency control process is established. Concurrency control based on the adaptive motion of the assembly unit handles concurrent operations on assembly models with deadlock joints, while for assembly models without deadlock joints, concurrency control based on the degree of manipulation ramification is applied. An engine assembly simulation with concurrent operations, including assembling the multi-joint piston, cylinder rod, and crankshaft to the cylinder, was used to prevent unsuccessful assembly caused by assembly interference.

  • Research Article
  • Cited by 2
  • 10.1504/ijnvo.2013.063045
Benefits and challenges of knowledge creation in concurrent virtual professional organisation formation and operation
  • Jan 1, 2013
  • International Journal of Networking and Virtual Organisations
  • Erno Salmela + 1 more

The study examines how concurrent virtual professional organisation (VPO) formation and operation affects knowledge creation. Usually, virtual organisation formation and operation are discussed as consecutive phases. The phenomenon is studied through a qualitative research approach and a single case study. The studied case is a project preparation (PP) whose goal is to start a public research project; knowledge creation plays an important role in the PP. As a theoretical contribution, the article presents a dynamic model that connects the concurrent VPO formation and operation phases to knowledge creation. As a practical implication, the study suggests that in an uncertain and hectic environment, concurrent VPO formation and operation may be unnecessary. To manage this kind of situation successfully, dynamic coordination competencies are needed.

  • Research Article
  • Cited by 21
  • 10.1109/tmtt.2019.2951147
Impact of the Input Baseband Terminations on the Efficiency of Wideband Power Amplifiers Under Concurrent Band Operation
  • Dec 1, 2019
  • IEEE Transactions on Microwave Theory and Techniques
  • Diogo R Barros + 3 more

This article presents a theoretical explanation of the efficiency degradation in wideband radio frequency power amplifiers (RFPAs) under concurrent dual-band operation that is still visible after eliminating all known causes related to the output matching network (OMN). First, the origin of this degradation, identified by the reduction of the PA's efficiency and linearity, is traced to the input baseband impedance terminations. Then, a theoretical model that describes the phenomenon and qualitatively predicts the efficiency variation under concurrent dual-band operation is presented using a simple model based on two-tone excitation. Finally, the proposed explanation is confirmed by comparing the efficiency performance of two 30-W GaN RFPAs of distinct instantaneous bandwidth determined by different input video-bandwidth terminations, under two-tone and concurrent dual-band operation. This article shows that, in addition to the adequate OMN design, an optimal input baseband impedance profile is also an important design constraint needed to keep the efficiency and linearity performance of RFPAs, for scenarios that require a wide instantaneous bandwidth.
