Databelt: A continuous data path for serverless workflows in the 3D compute continuum
- Research Article
- 10.5351/csam.2014.21.6.521
- Nov 30, 2014
A stochastic Gompertz diffusion model for tumor growth is a topic of active interest, as cancer is a leading cause of death in Korea. Direct maximum likelihood estimation of stochastic differential equations would be possible based on the continuous-path likelihood, on the condition that a continuous sample path of the process is recorded over the interval. This likelihood is useful in providing a basis for the so-called continuous-record or infill likelihood function and infill asymptotics. In practice, we do not have fully continuous data except in a few special cases. As a result, the exact ML method is not applicable. In this paper we propose a method of parameter estimation for the stochastic Gompertz differential equation via Markov chain Monte Carlo methods that is applicable to several data structures. We compare a Markov transition data structure with a data structure that has only an initial point. Keywords: Stochastic diffusion, Gompertz growth model, tumor growth, Bayesian, Markov data structure, sparse data structure.
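Since a fully continuous record is unavailable, estimation works with discretized sample paths. A minimal sketch of simulating such a path by Euler–Maruyama, assuming the common parameterization dX = X(a − b ln X) dt + σ X dW (the parameter names a, b, sigma are illustrative, not from the paper):

```python
import numpy as np

def simulate_gompertz(x0, a, b, sigma, dt, n_steps, rng):
    """Euler-Maruyama discretization of dX = X*(a - b*ln X) dt + sigma*X dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        drift = x[i] * (a - b * np.log(x[i]))
        # clip away from zero so ln X stays defined on the discrete path
        x[i + 1] = max(x[i] + drift * dt + sigma * x[i] * dw, 1e-12)
    return x

rng = np.random.default_rng(0)
path = simulate_gompertz(1.0, 1.5, 0.5, 0.1, 0.01, 1000, rng)
```

Simulated paths like this serve as synthetic data for checking an MCMC estimator against known parameters.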
- Research Article
14
- 10.1109/access.2019.2937672
- Jan 1, 2019
- IEEE Access
Robust end-to-end connectivity in vehicular environments has been a daunting problem standing in the way of the development and provisioning of novel in-transit data communication services, which are envisioned to bring remarkable enhancements to: a) road safety, b) the environment, and c) the welfare of travelling passengers. The establishment of continuous multi-hop data communication paths among arbitrary pairs of vehicular nodes is severely affected by the vehicular traffic flow's inherent limitations and obstacles, which have been widely investigated in the literature. The present work proposes a novel Vehicular Mobility Management (VMM) scheme whose objective is to regulate the vehicles' navigational parameters so as to steadily line the vehicles up in proximity of each other, with space headways that do not exceed the coverage range of their respective OnBoard Units. This allows the establishment of robust and long-lived communication links connecting the vehicles together, hence increasing the probability of existence of multi-hop paths between arbitrary pairs of vehicular nodes. A mathematical model is formulated to evaluate the performance of VMM in terms of multiple Quality-of-Service (QoS) metrics (e.g. average end-to-end data delivery delay, average throughput, probability of existence of an end-to-end path, etc.). A simulation framework is then established to verify the correctness and validity of the proposed model and to gauge the merits of the proposed VMM scheme. The reported results constitute tangible proof of the validity of the proposed model, as well as of the benefits of vehicular flow control, in a highly promising step towards the realization of the envisioned Internet of Vehicles (IoV).
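The spacing criterion behind VMM — every consecutive space headway within the OBU coverage range — can be checked directly. A minimal sketch (function and parameter names are illustrative, not from the paper):

```python
def multi_hop_path_exists(positions, coverage_range):
    """A multi-hop path spans the platoon iff every consecutive
    space headway is within the OBUs' coverage range."""
    pts = sorted(positions)
    return all(b - a <= coverage_range for a, b in zip(pts, pts[1:]))
```

With positions in meters and a 150 m coverage range, a platoon at 0, 120, 250, 390 m is fully connected, while removing the middle vehicles breaks the path.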
- Conference Article
2
- 10.1109/glocom.2018.8647645
- Dec 1, 2018
Establishing a continuous multi-hop data communication path between an arbitrary pair of vehicular nodes is severely affected by the natural vehicular traffic's restrictive mobility dynamics (e.g. flow rate, speed, direction of movement, etc.). This paper proposes a novel Vehicular Traffic Control (VTC) scheme whose objective is to regulate the vehicles' speeds so that they steadily navigate in proximity of each other, at inter-vehicular distances that do not exceed their respective OnBoard Units' ranges. This triggers the formation of robust and long-lived communication links connecting the vehicles together, hence increasing the path availability probability. A mathematical framework is established to evaluate the performance of VTC in terms of crucial Quality-of-Service (QoS) metrics such as path availability and average end-to-end packet delivery delay. The validity and accuracy of the proposed model are verified through simulations. The reported results constitute tangible proof of the merits of VTC in a vehicular networking scenario characterized by low-to-medium flow rates.
- Research Article
13
- 10.1109/tns.2019.2903646
- Jul 1, 2019
- IEEE Transactions on Nuclear Science
The main role of the ITER Radial Neutron Camera (RNC) diagnostic is to measure, in real time, the plasma neutron emissivity profile at high peak count rates for durations of up to 500 s. Due to the unprecedented high-performance conditions, and after the identification of critical problems, a set of activities was selected, focused on the development of high-priority prototypes capable of delivering answers to those problems before the final RNC design. This paper presents one of the selected activities: the design, development and testing of dedicated FPGA code for the RNC Data Acquisition prototype. The FPGA code aims to acquire, process and store, in real time, the neutron and gamma pulses from the detectors located in collimated lines of sight viewing a poloidal plasma section from the ITER Equatorial Port Plug 1. The hardware platform used was a Xilinx evaluation board (KC705) carrying an IPFN FPGA Mezzanine Card (FMC-AD2-1600) with 2 digitizer channels of 12-bit resolution sampling at up to 1.6 GSamples/s. The code performs the proper input signal conditioning using a configuration down-sampled to 400 MSamples/s, applies dedicated algorithms for pulse detection, filtering and pileup detection, and includes two distinct data paths operating simultaneously: i) an event-based data path for pulse storage; and ii) real-time processing, with dedicated algorithms for pulse shape discrimination and pulse height spectra. For continuous data throughput, both data paths are streamed to the host through two distinct PCIe x8 Direct Memory Access (DMA) channels.
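As an illustration of the first step on the event-based path, a toy threshold-crossing pulse detector (purely schematic; the actual FPGA algorithms for filtering and pileup handling are far more involved):

```python
def detect_pulses(samples, threshold):
    """Return start indices of pulses: rising crossings of the threshold."""
    starts, above = [], False
    for i, s in enumerate(samples):
        if s >= threshold and not above:
            starts.append(i)   # new pulse begins here
            above = True
        elif s < threshold:
            above = False      # re-arm once the signal drops back down
    return starts
```

Pileup detection then amounts to flagging pulses whose start indices fall closer together than the detector's pulse width.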
- Conference Article
25
- 10.1109/icme.2004.1394686
- Jun 27, 2004
Central to capturing a trip is knowing where you were and when you were there. Combining continuous path data with media (path-enhanced media, or PEM) offers substantial advantages over the previous approach of tagging individual media with time and location. Prototype systems, collectively called PathMarker, are used for gathering, editing, presenting and browsing PEM. We have developed: (1) a methodology for gathering PEM with off-the-shelf hardware; (2) software for automatic conversion of the raw path data and media into an application-independent XML representation; and (3) two example PEM applications. The first application provides map-overlaid trip editing, presentation and browsing. The second provides a 3D immersive environment with digital elevation maps for automatic trip flybys and for browsing. Experience with a number of recorded trips confirms that PathMarker systems capture the essence of a trip.
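The conversion of raw path data into an application-independent XML representation could look like the following toy serializer (the element and attribute names are invented for illustration; the PathMarker schema is not specified here):

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def path_to_xml(points):
    """Serialize (timestamp, lat, lon) samples into a simple XML path document."""
    root = Element("path")
    for t, lat, lon in points:
        SubElement(root, "point", time=str(t), lat=f"{lat:.6f}", lon=f"{lon:.6f}")
    return tostring(root, encoding="unicode")

doc = path_to_xml([(0, 37.422000, -122.084000), (30, 37.423100, -122.083200)])
```

Media items would then reference the path by timestamp rather than carrying their own location tags.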
- Conference Article
3
- 10.1109/robot.1998.681423
- May 16, 1998
This paper presents a path planning method for planetary rovers based on merging gradient navigation maps. The trajectories are calculated in continuous mode, and perception data are provided by a stereo bench that captures image pairs during rover movements. The path planning algorithms have been designed and tested using a fully functional robot simulator. A first implementation of the method is being tested on a Marsokhod-type chassis equipped with a stereo bench designed to take snapshots while the robot is stopped. We give some details about performance in terms of the CPU time and memory space required for such an implementation. Finally, an ongoing experiment involving our IARES chassis is briefly described.
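Merging gradient navigation maps can be sketched as a cell-wise combination of per-observation cost maps. A minimal illustration, assuming a "keep the most pessimistic cost per cell" merge rule (one plausible choice, not necessarily the paper's):

```python
import numpy as np

def merge_navigation_maps(cost_maps):
    """Conservatively merge gradient cost maps from successive stereo
    acquisitions: keep the highest traversal cost seen at each cell."""
    return np.maximum.reduce([np.asarray(m, dtype=float) for m in cost_maps])

merged = merge_navigation_maps([[[1, 2], [3, 0]],
                                [[0, 5], [2, 1]]])
```

The gradient descent for the trajectory would then run over the merged map.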
- Research Article
21
- 10.1007/s12652-020-02606-7
- Oct 31, 2020
- Journal of Ambient Intelligence and Humanized Computing
Frequent pattern mining deals with finding patterns, subsequences, and substructures that occur frequently in a large database. Likewise, frequent pattern mining can be used on MANET nodes to identify the paths that participate in frequent data transactions among the various mobile ad hoc network nodes. The network data stream is a long, continuous sequence of data sets transmitted over the network. The OCA (Online Combinatorial Approximation) algorithm is used for mining online data in the data stream. The processing time of OCA is much lower than that of traditional mining methods, while the accuracy of its approximate results remains quite high. The Data Path Combinatorial Approximation (DPCA) algorithm deals with frequent pathset mining over the MANET data flow. A pathset is generated from the set of paths on any node that provides paths to the other nodes participating in the data transmission. The mining algorithm is based on an approximate Inclusion–Exclusion technique. Without continual path scanning, approximate counts are calculated for the pathsets. A skip-and-complete technique and a group-count technique were combined and integrated into the DPCA algorithm to improve MANET performance in terms of identifying misbehaving nodes.
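For contrast with DPCA's approximate counts, exact frequent-pathset mining by brute-force counting looks like this (a naive sketch with illustrative names; DPCA avoids exactly this rescanning via Inclusion–Exclusion):

```python
from collections import Counter

def frequent_pathsets(transactions, min_support):
    """Exact counting of pathsets: a pathset is frequent if it appears
    in at least min_support fraction of the observed transactions."""
    counts = Counter(frozenset(t) for t in transactions)
    total = len(transactions)
    return {ps for ps, c in counts.items() if c / total >= min_support}

tx = [("a", "b"), ("a", "b"), ("b", "c"), ("a", "b")]
frequent = frequent_pathsets(tx, 0.5)
```

The exact version must touch every transaction per query, which is what becomes infeasible over a continuous network stream.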
- Conference Article
2
- 10.1109/cesys.2016.7889972
- Oct 1, 2016
In a mobile ad hoc network (MANET), a node's energy may weaken or the node may move out of communication range without any prior notice to its neighbors, causing changes in topology that may extensively degrade the performance of a routing protocol. Changes in topology due to mobility and energy drain produce intermittent network connections, and continual path finding increases network overhead and delay. The proposed Loyalty Pair Neighbors Selection based Adaptive Re-transmission Reduction Routing in MANET (LPNS) protocol is designed to enhance loyal neighbor node selection and to construct a stable path by minimizing the retransmission of routing packets and the associated energy consumption. It introduces a transmission delay so that routing packets are re-transmitted through the nodes that have the largest loyalty pair neighbors set (LPNSS). Then, transmission range (TR), re-transmission feasibility, and edge facet (EF) are computed to establish a data path through high signal strength nodes (SSN). Finally, the network lifetime is improved by establishing a stable route, which in turn reduces control overhead, delay, and energy consumption.
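The core selection rule — prefer the relay with the largest loyalty-pair neighbors set — can be sketched as follows (the field names are hypothetical stand-ins for the paper's LPNSS; the actual protocol also weighs TR, re-transmission feasibility, and EF):

```python
def select_relay(neighbors):
    """Pick the neighbor advertising the largest loyalty-pair neighbor set."""
    return max(neighbors, key=lambda n: len(n["lpns"]))["id"]

nbrs = [{"id": "n1", "lpns": {"a"}},
        {"id": "n2", "lpns": {"a", "b", "c"}}]
best = select_relay(nbrs)
```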
- Research Article
- 10.2352/issn.2169-4451.2003.19.1.art00103_2
- Jan 1, 2003
- NIP & Digital Fabrication Conference
A digital printing system must be equipped with a high-performance image codec module to meet the requirements of high image resolution and processing speed. A DWT (Discrete Wavelet Transform)-based image encoding and decoding technique, commonly adopted for high-performance image applications, is proposed. The kernel idea of the technique is a parallel processing mode that uses an arbiter to maintain continuous data propagation between the DWT and multiple Entropy Coders. In this scheme, the role of the arbiter is to decide the propagation path of the Code Block data between the DWT and one of the Entropy Coders. In addition, an efficient management strategy for the Code Blocks is another crucial design element for improving the encoding and decoding speed. As a result, a high-performance hardware codec and a speed-tuning module are implemented. This implementation achieves flexibility and high performance by striking an optimal balance between cost and performance across various image codec applications.
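The arbiter's job of keeping multiple Entropy Coders fed can be sketched as a simple round-robin dispatcher (a software analogy of the hardware arbiter; the real design may arbitrate on coder readiness rather than position):

```python
def arbitrate(code_blocks, n_coders):
    """Round-robin arbiter: assign each DWT code block to one of
    n_coders entropy coders so the coders work in parallel."""
    queues = [[] for _ in range(n_coders)]
    for i, block in enumerate(code_blocks):
        queues[i % n_coders].append(block)
    return queues

queues = arbitrate(["b0", "b1", "b2", "b3", "b4"], 2)
```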
- Research Article
10
- 10.1016/j.measurement.2020.107550
- Jan 31, 2020
- Measurement
Path generation and optimization for DBB measurement with continuous data capture
- Conference Article
10
- 10.1109/isscc.2016.7417970
- Jan 1, 2016
The consumer electronics market demands high-speed and low-power serial data interfaces. The injection-locked oscillator (ILO) based clock and data recovery (CDR) circuit [1–2] is a well-known solution for these demands. The typical solution has at least two oscillators: a master and one or more slaves. The master, a replica of the data path ILO, is part of a phase-locked loop (PLL) used to correct the oscillator free-running frequency (FRF). The slave ILO phase-locks to the incoming data but uses the frequency control from the master. Any FRF difference between the master and slave, such as that caused by PVT or mismatch, reduces receiver performance. One solution to the reduced performance [3] uses burst data and corrects the FRF between bursts. However, for continuous data, injection forces the recovered clock frequency to match the incoming data rate, masking any FRF error from the frequency detector. Existing solutions [4–5] use a phase detector (PD) to measure the FRF. However, any static phase offset between the PD lock point and the ILO lock point causes the frequency control algorithm to converge incorrectly. Static phase offset can be caused by mismatch, PVT, or layout.
- Book Chapter
1
- 10.1007/978-981-16-4095-7_16
- Oct 20, 2021
In this chapter, we introduce aspects of applying data-compression techniques. First, we study the background of recent communication data paths. The focus of this chapter is a fast lossless data-compression mechanism that handles entire data streams. A data stream comprises continuous data, with no termination, generated in massive volumes by sources such as movies and sensors. In this chapter, we introduce LCA-SLT and LCA-DLT, which accept such data streams, as well as several implementations of these stream-based compression techniques. We also show optimization techniques for optimal implementation in hardware.
- Research Article
5
- 10.1007/s00034-020-01472-0
- Jun 19, 2020
- Circuits, Systems, and Signal Processing
‘Fast Fourier transform’ (FFT), being a prevalent algorithm for the efficient computation of the ‘discrete Fourier transform,’ constitutes one of the major sub-modules in numerous real-time signal processing systems. In this article, a new approach to CORDIC-based high-radix FFT architecture is demonstrated. Having identified complex rotation as the most time-consuming elementary operation of the FFT, the number of such complex rotations has been optimized by adopting radix-8-based FFT computation. In addition, CORDIC is employed to realize the complex rotation, setting aside its multiplier–accumulator (MAC)-based counterpart, to further economize the VLSI implementation of the proposed FFT architecture. Furthermore, the requirement for CORDIC blocks in the last three stages of the radix-8 FFT computation has been eliminated entirely by utilizing SCALE blocks, as the rotations in those stages can be expressed in terms of $$\pi /4$$ or its multiples. RAM is arranged in the form of memory banks to provide parallel data path operations, and RAM switching is performed between stages to sustain continuous data flow while circumventing data access hazards. The throughput of the proposed radix-8 architecture is eight outputs per clock cycle, while the maximum clock frequency is limited only by the propagation delay of an adder. Hardware utilization and a comparative performance evaluation are reported to demonstrate the proposed architecture's superiority. Our prototype radix-8 architecture has been successfully implemented on a Zynq UltraScale+ FPGA using Xilinx Vivado 18.2 software to verify its feasibility in practical applications.
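The claim that the last radix-8 stages need no CORDIC rests on every 8-point twiddle exp(−2jπnk/8) being a rotation by a multiple of π/4, which hardware can realize with sign/swap logic plus a single 1/√2 scaling. A small NumPy check of the underlying arithmetic:

```python
import numpy as np

def dft8(x):
    """Direct 8-point DFT: every twiddle exp(-2j*pi*n*k/8) is a rotation
    by a multiple of pi/4, so these stages need only SCALE-style logic."""
    n = np.arange(8)
    return np.exp(-2j * np.pi * np.outer(n, n) / 8) @ np.asarray(x, dtype=complex)

# All 8-point twiddle angles are exact multiples of pi/4
angles = 2 * np.pi * np.arange(8) / 8
assert np.allclose(angles % (np.pi / 4), 0)
```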
- Book Chapter
3
- 10.1007/978-3-319-29006-5_11
- Jan 1, 2016
To handle Big Data efficiently, the communication speed of the inter- and intra-system data paths on high-performance computing systems has been pushed very high. Despite the rapid growth of Big Data, implementing such data communication paths has become complex due to electrical difficulties such as noise, crosstalk, and reflections on high-speed connections over a single copper-based physical wire. This paper proposes a novel hardware solution that applies a stream-based data compression algorithm called LCA-DLT. The compression algorithm can process a continuous data stream without exchanging the symbol lookup table between the compressor and the decompressor. The algorithm includes dynamic frequency management of data patterns, implemented by a dynamic histogram creation optimized for hardware implementation. When a dedicated communication protocol is combined with LCA-DLT, it supports remote data migration among computing systems. This paper describes the algorithm design and the hardware implementation of LCA-DLT, and reports the compression performance, including the required hardware resources.
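LCA-DLT's key property — compressor and decompressor grow identical symbol lookup tables deterministically from the stream itself, so no table is exchanged — is shared by LZW-style coding. A toy sketch of that shared idea (not the LCA-DLT algorithm itself, whose table management differs):

```python
def stream_compress(data: bytes):
    """LZW-style streaming compression: the symbol table is built from the
    stream itself, so the decompressor can reconstruct it in lockstep
    and no table exchange is needed."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # extend the current match
        else:
            out.append(table[w])        # emit code for the longest match
            table[wc] = len(table)      # learn the new pattern
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = stream_compress(b"ababab")
```

Repeated patterns shrink as the table learns them, which is the effect the hardware exploits on continuous streams.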