A Systems Re-engineering Case Study: Programming Robots with occam and Handel-C

Abstract

This paper introduces a case study exploring some of the legacy issues that may be faced when redeveloping a system. The case study is a robotics system programmed in occam and Handel-C, allowing us to draw comparisons between software and hardware implementations in terms of program architecture, ease of program code verification, and differences in the behaviour of the robot. The two languages used have been selected because of their model of concurrency and their relation to CSP. The case study contributes evidence that re-implementing a system from an abstract model may present implementation-specific issues despite maintaining the same underlying program control structure. The paper identifies these problems and suggests a number of steps that could be taken to help mitigate some of the issues.
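Both occam and Handel-C inherit CSP's model of concurrency: independent processes composed in parallel that synchronise by communicating over channels. As a rough illustration only (not code from the paper), the sketch below mimics occam's `PAR`, `ch ! x`, and `ch ? x` using Python threads and a queue; note that a Python `Queue` is buffered, unlike occam's synchronous rendezvous channels.

```python
import threading
import queue

def producer(ch):
    # Send ten values down the channel, then a sentinel
    # (rough analogue of occam's "ch ! x").
    for i in range(10):
        ch.put(i)
    ch.put(None)

def consumer(ch, results):
    # Receive until the sentinel (rough analogue of "ch ? x").
    while True:
        x = ch.get()
        if x is None:
            break
        results.append(x)

ch = queue.Queue()   # the "channel"; buffered, unlike occam's synchronous channels
results = []

# PAR-like composition: run both processes concurrently, wait for both.
t1 = threading.Thread(target=producer, args=(ch,))
t2 = threading.Thread(target=consumer, args=(ch, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(sum(results))
```

In occam the same structure would compile to interleaved software processes, while Handel-C maps each parallel branch to its own hardware, which is one source of the behavioural differences the paper examines.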

Similar Papers
  • Conference Article
  • Cited by: 78
  • 10.1145/191995.192021
Software versus hardware shared-memory implementation: a case study
  • Apr 1, 1994
  • Alan L Cox + 5 more

We compare the performance of software-supported shared memory on a general-purpose network to hardware-supported shared memory on a dedicated interconnect. Up to eight processors, our results are based on the execution of a set of application programs on an SGI 4D/480 multiprocessor and on TreadMarks, a distributed shared memory system that runs on a Fore ATM LAN of DECstation-5000/240s. Since the DECstation and the 4D/480 use the same processor, primary cache, and compiler, the shared-memory implementation is the principal difference between the systems. Our results show that TreadMarks performs comparably to the 4D/480 for applications with moderate amounts of synchronization, but the difference in performance grows as the synchronization frequency increases. For applications that require a large amount of memory bandwidth, TreadMarks can perform better than the SGI 4D/480. Beyond eight processors, our results are based on execution-driven simulation. Specifically, we compare a software implementation on a general-purpose network of uniprocessor nodes, a hardware implementation using a directory-based protocol on a dedicated interconnect, and a combined implementation using software to provide shared memory between multiprocessor nodes with hardware implementing shared memory within a node. For the modest size of the problems that we can simulate, the hardware implementation scales well and the software implementation scales poorly. The combined approach delivers performance close to that of the hardware implementation for applications with small to moderate synchronization rates and good locality. Reductions in communication overhead improve the performance of the software and the combined approach, but synchronization remains a bottleneck.

  • Research Article
  • Cited by: 7
  • 10.1002/j.1556-6676.1994.tb01704.x
A Concurrent (Versus Stage) Model for Conceptualizing and Representing the Counseling Process
  • Sep 10, 1994
  • Journal of Counseling & Development
  • Charles A Waehler + 1 more

The counseling process is presented both conceptually and visually as a collection of activities occurring simultaneously in varying degrees. This concurrent model differs from traditional discrete, sequential‐stage representations because it is more complex and less disjointed. The model better represents actual counselor‐client interactions, overcoming the shortcomings expressed by others regarding their own models. The concurrent model is more inclusive and integrated, yet flexible, and is discussed as a potential aid for training, supervision, practice, and research. A case example is included, which highlights the utility of the concurrent model in understanding the counseling process.

  • Research Article
  • Cited by: 3
  • 10.11591/ijeecs.v18.i3.pp1331-1341
Algorithm development and hardware implementation for medical image compression system: a review
  • Jun 1, 2020
  • Indonesian Journal of Electrical Engineering and Computer Science
  • Noor Huda Ja’Afar + 1 more

In the high-tech world, medical imaging is very important for diagnosing and analyzing illness inside the human body. The growing number of patients each year continuously increases the amount of medical imaging data generated and directly drives demand for data storage. Medical images are generally rich in data that are important for diagnostic purposes; however, some of the data represent redundant information and can sometimes be discarded. Thus, research on medical image compression for three-dimensional (3-D) modalities needs more attention and exploration. Algorithm development using the wavelet transform with software implementation is the topic most explored among researchers, whereas fewer works have utilized the curvelet transform for medical image compression. In addition, very little hardware implementation of 3-D medical image compression has been reported. In terms of performance evaluation, most previous works conducted objective tests rather than subjective tests. To fill this gap, medical image compression systems are reviewed here, with the aim of identifying the methods recently used. This paper thoroughly scrutinizes recent advances in medical image compression, mainly in terms of compression methods, algorithm development with software and hardware implementations, and performance evaluation. The overall picture of the medical image compression landscape shows that most researchers have focused on algorithm development or software implementation without combining software and hardware implementations.
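The wavelet-based compression the review highlights rests on a simple idea: a wavelet transform concentrates image energy into a few large coefficients, and the many small detail coefficients can be quantized or discarded. A minimal sketch of one level of the 1-D Haar transform (my illustration, not taken from the paper) makes the mechanism concrete:

```python
def haar_forward(x):
    # One level of the 1-D Haar transform: pairwise averages
    # (approximation) and differences (detail). Length must be even.
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    dif = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg, dif

def haar_inverse(avg, dif):
    # Exact reconstruction: x[2i] = avg[i] + dif[i], x[2i+1] = avg[i] - dif[i].
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out

signal = [10, 12, 8, 8, 50, 52, 7, 9]
avg, dif = haar_forward(signal)
assert haar_inverse(avg, dif) == signal
```

In smooth image regions the `dif` values are near zero, which is what lossy coders exploit; production systems use longer wavelet filters and multiple decomposition levels, but the average/detail split is the same.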

  • Research Article
  • 10.1145/3777453
Software and Hardware Implementations of a Cyber-Physical Firewall
  • Dec 2, 2025
  • ACM Transactions on Cyber-Physical Systems
  • Ryo Iijima + 2 more

This study evaluates software and hardware implementations based on the Cyber-Physical Firewall (CPFW) framework, which provides a flexible and generic access control mechanism for regulating malicious analog signals targeting cyber-physical systems. We describe the CPFW framework design and the implementation strategies for both software and hardware. The software implementation was performed on a Raspberry Pi, whereas the hardware implementation was performed on a Zybo Z7-10 SoC board. We evaluate the characteristics of each implementation, particularly for audio signals, and discuss the differences that arise between software and hardware implementations, along with the selection of an appropriate approach based on specific requirements. The evaluation results demonstrate that the software implementation of the CPFW framework has an overhead of 3.219 ms, whereas the hardware implementation has an overhead of 310 ns, a difference of roughly \(10^{4}\). The hardware implementation incurred only a 5.88% increase in resource utilization from the added CPFW circuits, indicating that the added circuit size is practical for real-world applications.

  • Book Chapter
  • Cited by: 5
  • 10.1007/978-3-030-00253-4_16
Synthesis and Optimization of Green Fuzzy Controllers for the Reactors of the Specialized Pyrolysis Plants
  • Sep 30, 2018
  • Oleksiy Kozlov + 3 more

This paper presents the authors' generalized step-by-step method for the synthesis and optimization of green fuzzy controllers (FCs) for the automatic control systems (ACS) regulating reactor temperature in specialized pyrolysis plants (SPPs). The proposed method makes it possible to synthesize and optimize Mamdani-type green FCs for the temperature modes of SPP reactors that provide (a) high accuracy and quality of temperature control, (b) low energy consumption during operation, and (c) relatively simple software and hardware implementation. The initial synthesis of the structure and parameters of the green FCs is based on expert assessments and recommendations. Their further optimization, aimed at improving quality indicators, reducing energy consumption, and simplifying software/hardware realization, is carried out with specific optimization procedures based on mathematical programming methods. To study and validate the effectiveness of the method, a Mamdani-type green FC was designed for the temperature ACS of the pyrolysis reactor of an experimental SPP. The developed controller has a relatively simple hardware and software implementation and achieves high-quality control of the temperature modes at sufficiently low energy consumption, confirming the efficiency of the proposed method.
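A Mamdani-type controller of the kind the abstract describes evaluates fuzzy rules over the control error and blends their outputs. The sketch below is a deliberately simplified illustration with hypothetical membership ranges and rule outputs (a weighted-average defuzzification stands in for the full Mamdani centroid, a common simplification when cheap hardware realization matters, as it does here):

```python
def tri(x, a, b, c):
    # Triangular membership function with feet at a and c, peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_heater(temp, setpoint=500.0):
    # Two illustrative rules on the temperature error (values hypothetical):
    #   IF error is "large"  THEN heater power is high (80%)
    #   IF error is "small"  THEN heater power is low  (10%)
    err = setpoint - temp
    w_large = tri(err, 0, 50, 100)    # reactor well below setpoint
    w_small = tri(err, -50, 0, 50)    # reactor near setpoint
    # Weighted-average defuzzification (cheap stand-in for Mamdani centroid).
    num = w_large * 80.0 + w_small * 10.0
    den = w_large + w_small
    return num / den if den else 0.0
```

A real green FC would have many more rules, tuned membership shapes, and the optimization loop the paper describes; the point here is only the membership/rule/defuzzify pipeline that makes such controllers cheap to realize in hardware.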

  • Conference Article
  • Cited by: 41
  • 10.23919/fpl.2017.8056808
Comparison of hardware and software implementations of selected lightweight block ciphers
  • Sep 1, 2017
  • William Diehl + 4 more

Lightweight block ciphers are an important topic of research in the context of the Internet of Things (IoT). Current cryptographic contests and standardization efforts seek to benchmark lightweight ciphers in both hardware and software. Although there have been several benchmarking studies of both hardware and software implementations of lightweight ciphers, direct comparison of hardware and software implementations is difficult due to differences in metrics, measures of effectiveness, and implementation platforms. In this research, we facilitate this comparison by use of a custom lightweight reconfigurable processor. We implement six ciphers, AES, SIMON, SPECK, PRESENT, LED and TWINE, in hardware using register transfer level (RTL) design, and in software using the custom reconfigurable processor. Both hardware and software implementations are instantiated in identical Xilinx Kintex-7 FPGAs, which enables direct comparison of throughput, area, throughput-to-area (TP/A) ratio, power, and energy. Results show that TWINE and AES have the highest TP/A ratios for hardware and software implementations, respectively, assuming an area target of 300–450 LUTs. In terms of direct comparison, software implementations on tailored reconfigurable processors generally use less power, especially where reconfigurable instruction set extensions are permitted. However, custom hardware implementations have higher throughput and energy-efficiency than software implementations on the same platform.
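SPECK, one of the six ciphers benchmarked, is an ARX (add-rotate-xor) design whose round function is small in both hardware and software. The sketch below shows one round and its inverse for the 32-bit-word variants (rotation amounts 8 and 3); the key schedule is omitted and the round keys are placeholders, so this is an illustration of the round structure, not a usable cipher:

```python
MASK = 0xFFFFFFFF  # 32-bit words, as in SPECK64

def rol(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def ror(x, r):
    return ((x >> r) | (x << (32 - r))) & MASK

def speck_round(x, y, k):
    # One SPECK round for 32-bit words: rotate, add, xor key, rotate, xor.
    x = ((ror(x, 8) + y) & MASK) ^ k
    y = rol(y, 3) ^ x
    return x, y

def speck_round_inv(x, y, k):
    # Inverse round: undo each step in reverse order.
    y = ror(y ^ x, 3)
    x = rol(((x ^ k) - y) & MASK, 8)
    return x, y

# Round-trip check with placeholder round keys (not a real key schedule).
keys = [0x11111111, 0x22222222, 0x33333333]
x0, y0 = 0x3B726574, 0x7475432D
x, y = x0, y0
for k in keys:
    x, y = speck_round(x, y, k)
for k in reversed(keys):
    x, y = speck_round_inv(x, y, k)
assert (x, y) == (x0, y0)
```

The entire round is a handful of adders, rotations, and xors, which is exactly why such ciphers map to small RTL designs and to tiny custom-processor instruction sets, enabling the like-for-like comparison the paper performs.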

  • Research Article
  • Cited by: 45
  • 10.1147/rd.326.0727
Optimal hardware and software arithmetic coding procedures for the Q-Coder
  • Nov 1, 1988
  • IBM Journal of Research and Development
  • J L Mitchell + 1 more

The Q-Coder is an important new development in arithmetic coding. It combines a simple but efficient arithmetic approximation for the multiply operation, a new formalism which yields optimally efficient hardware and software implementations, and a new form of probability estimation. This paper describes the concepts which allow different, yet compatible, optimal software and hardware implementations. In prior binary arithmetic coding algorithms, efficient hardware implementations favored ordering the more probable symbol (MPS) above the less probable symbol (LPS) in the current probability interval. Efficient software implementation required the inverse ordering convention. In this paper it is shown that optimal hardware and software encoders and decoders can be achieved with either symbol ordering. Although optimal implementation for a given symbol ordering requires the hardware and software code strings to point to opposite ends of the probability interval, either code string can be converted to match the other exactly. In addition, a code string generated using one symbol-ordering convention can be inverted so that it exactly matches the code string generated with the inverse convention. Even where bit stuffing is used to block carry propagation, the code strings can be kept identical.
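The symbol-ordering question the paper resolves is easiest to see in a toy model of binary arithmetic coding. The sketch below is a floating-point illustration with the MPS ordered below the LPS in the interval; it is not the Q-Coder itself, which uses fixed-point arithmetic, renormalization, and a multiply-free approximation of the `range * p` product:

```python
def ac_encode(bits, p_mps=0.8, mps=0):
    # Toy floating-point arithmetic encoder, MPS in the lower sub-interval.
    low, rng = 0.0, 1.0
    for b in bits:
        split = rng * p_mps        # the real Q-Coder approximates this multiply
        if b == mps:
            rng = split            # MPS keeps the lower part of the interval
        else:
            low, rng = low + split, rng - split
    return low + rng / 2           # any value inside the final interval decodes

def ac_decode(v, n, p_mps=0.8, mps=0):
    # Decoder retraces the same interval subdivision.
    out, low, rng = [], 0.0, 1.0
    for _ in range(n):
        split = rng * p_mps
        if v < low + split:
            out.append(mps); rng = split
        else:
            out.append(1 - mps); low, rng = low + split, rng - split
    return out

msg = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
assert ac_decode(ac_encode(msg), len(msg)) == msg
```

Flipping which symbol takes the lower sub-interval gives the inverse ordering convention; the paper's contribution is showing that optimal hardware and software coders exist for either choice, and that the resulting code strings can be converted to match exactly.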

  • Research Article
  • Cited by: 19
  • 10.34069/ai/2020.27.03.24
Reduced hardware costs with software and hardware implementation of digital methods multistage discrete Fourier transform on programmable logic devices
  • Mar 21, 2020
  • Revista Amazonia Investiga
  • Adeliya Yu Burova + 2 more

We consider questions connected with the hardware and software implementation of digital signal processing (DSP) methods. The theoretical basis of this research consists of non-recursive difference digital filtering with integer difference coefficients of various orders, together with methods of multistage discrete Fourier transform (DFT) based on such filtering. The purpose of the study is to identify and formalize a necessary and sufficient condition for lowering hardware costs in the hardware and software implementation of multistage DFT methods for digital signals on programmable logic devices (PLDs). To reach this goal, direct search and comparative analysis are applied to implementations of the multistage DFT of digital multi-band signals, filtered using non-recursive difference digital filters with integer difference coefficients of various orders. The capabilities and particularities of PLDs built on coarse-grained, fine-grained, or combined architectures are described, the combined architecture uniting the convenience of implementing DSP algorithms on the basis of code-conversion tables and reconfigurable memory modules. It is shown that a necessary and sufficient condition for lowering hardware costs when implementing multistage DFT methods on PLDs is the triviality of the integer difference coefficients of the high-order non-recursive difference digital filters involved. A formula expressing this condition is given.
The practical significance of the results lies in defining this necessary and sufficient condition for lowering hardware costs when implementing multistage DFT methods on PLDs based on non-recursive difference digital filtering with integer difference coefficients of various orders. The novelty of the results lies in the formalization of this condition. The reliability of the results is supported by their agreement with well-known developments in DSP methods.

  • Research Article
  • 10.1016/j.vlsi.2017.08.002
Several weaknesses of the implementation for the theoretically secure masking schemes under ISW framework
  • Sep 1, 2017
  • Integration
  • Yanbin Li + 3 more


  • Book Chapter
  • 10.1007/978-3-540-37256-1_75
Spatial Reasoning for Collision Detection and Hardware Implementation
  • Jan 1, 2006
  • Chirag Nepal + 3 more

Spatial reasoning is a core constituent in physical simulation, robotics, computer animation, computer-aided design, and geographic information systems. Many problems in these areas involve contact analysis and collision detection between static and/or moving objects. Due to its wide range of applications, collision detection between objects has been studied in various fields, but collision detection is still considered a major computational bottleneck. We classified collision detection problems into fourteen cases and implemented them using graphics hardware. For efficient collision detection, the algorithm uses various forms of bounding volumes, an approximate but efficient mechanism, and the program code is optimized. Our algorithm also produces the intersection region inside an object as well as the intersection point and collision time. We tested both software and hardware implementations on the flight path problem with an actual satellite picture of Seoul, represented as a polygon mesh of 250,000 triangles. Experimental results demonstrated that the hardware implementation was up to 70 times faster than the software implementation and that code optimization and hardware implementation can significantly speed up the collision detection process.
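Bounding volumes speed up collision detection because a cheap conservative test prunes most object pairs before any exact triangle-level test runs. As one common example (an illustration, not the paper's fourteen-case classification), axis-aligned bounding boxes intersect exactly when their extents overlap on every axis:

```python
def aabb_overlap(a, b):
    # Axis-aligned bounding boxes given as ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    # Boxes intersect iff their intervals overlap on all three axes.
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

box1 = ((0, 0, 0), (2, 2, 2))
box2 = ((1, 1, 1), (3, 3, 3))
box3 = ((5, 5, 5), (6, 6, 6))
assert aabb_overlap(box1, box2)
assert not aabb_overlap(box1, box3)
```

Only pairs that pass this filter proceed to the exact (and far more expensive) mesh-level test, which is the part the paper accelerates on graphics hardware.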

  • Conference Article
  • Cited by: 1
  • 10.1109/iconip.1999.845657
Performance comparison of correlation matrix memory implementations
  • Nov 16, 1999
  • J Young + 2 more

This paper compares the performance of software and hardware implementations of binary correlation matrix memory (CMM). CMM is a simple, one-layer neural network with a Hebbian learning rule which offers excellent speed and scalability advantages. CMM building blocks form the basis of the AURA neural network system which has been applied to a broad range of practical problems. The paper presents the results of a performance comparison between recent software and hardware implementations of binary CMM. The results show that the hardware implementation provides a best-case speed-up of 50 over the software implementation. Finally, some areas for further improvement in the hardware implementation are identified.
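A binary CMM of the kind described here stores pattern pairs by OR-ing their outer products into a binary weight matrix, and recalls by summing matched bits and thresholding. The sketch below is a minimal pure-Python illustration (the threshold-at-input-weight recall rule is one standard choice; AURA-scale systems use packed bit vectors rather than lists):

```python
def cmm_train(pairs, n_in, n_out):
    # Binary Hebbian learning: W[i][j] is set whenever output bit i
    # and input bit j are active together in some training pair.
    W = [[0] * n_in for _ in range(n_out)]
    for x, y in pairs:
        for i in range(n_out):
            if y[i]:
                for j in range(n_in):
                    if x[j]:
                        W[i][j] = 1
    return W

def cmm_recall(W, x):
    # Sum matched bits per output line and threshold at the number of
    # set bits in the cue, so only fully matched lines fire.
    t = sum(x)
    return [1 if sum(w & b for w, b in zip(row, x)) >= t else 0 for row in W]

x1, y1 = [1, 0, 1, 0], [0, 1, 1]
W = cmm_train([(x1, y1)], n_in=4, n_out=3)
assert cmm_recall(W, x1) == y1
```

Because training is a single pass of bitwise ORs and recall is a bitwise AND-and-count, the whole operation maps naturally onto wide parallel hardware, which is where the reported 50x speed-up comes from.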

  • Conference Article
  • Cited by: 4
  • 10.1109/spl.2009.4914898
Parameterized hardware design on reconfigurable computers: An image registration case study
  • Apr 1, 2009
  • Miaoqing Huang + 3 more

Reconfigurable computers (RCs) with hardware (FPGA) co-processors can achieve significant performance improvement over traditional computers for certain categories of applications. The potential speedup an RC can deliver depends on the intrinsic parallelism of the target application as well as the characteristics of the target platform. In this paper, we use an image registration implementation as a case study to show how a hardware implementation is parameterized by the co-processor architecture, particularly the local memory layout. Image registration is a fundamental task in image processing used to match two or more pictures taken at different times, from different sensors, or from different viewpoints. One of several basic transformations in image registration is the rigid-body transformation, composed of a rotation θ, a translation (t_x, t_y), and a scale change s. In this work, the rigid-body transformation is applied to the test image to register it with the reference image, and the correlation coefficient is used as the similarity metric between the two images. Two algorithms, an exhaustive search algorithm and a discrete wavelet transform (DWT)-based search algorithm, are implemented in hardware (an FPGA device on a Cray XD1 reconfigurable computer). The hardware implementation of the exhaustive search algorithm is 10 times faster than the software implementation. The performance improvement of the DWT-based search algorithm in hardware is roughly twofold compared with the corresponding software implementation.
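The similarity metric in this study, the correlation coefficient, scores how well a transformed test image matches the reference. A minimal sketch over flattened pixel sequences (my illustration of the standard Pearson formula, not the paper's FPGA datapath):

```python
from math import sqrt

def correlation_coefficient(a, b):
    # Pearson correlation between two equal-length pixel sequences:
    # covariance of the two images over the product of their standard
    # deviations, giving a score in [-1, 1].
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(va * vb)

ref = [10, 20, 30, 40]
assert abs(correlation_coefficient(ref, ref) - 1.0) < 1e-12
# Invariant to uniform brightness shifts, a useful property for registration:
assert abs(correlation_coefficient(ref, [12, 22, 32, 42]) - 1.0) < 1e-12
```

The registration search then amounts to evaluating this score over many candidate (θ, t_x, t_y, s) combinations and keeping the best, which is why both the exhaustive and DWT-guided searches benefit from hardware parallelism.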

  • Conference Article
  • Cited by: 35
  • 10.1109/rsp.2006.34
RTOS Scheduler Implementation in Hardware and Software for Real Time Applications
  • Jun 14, 2006
  • M Vetromille + 4 more

In order to enhance performance and improve predictability of real-time systems, implementing some critical operating system functionalities, such as time management and task scheduling, in software and others in hardware is an interesting approach. Scheduling decisions for real-time embedded software applications are an important problem in real-time operating systems (RTOS) and have a great impact on system performance. In this paper, we evaluate the pros and cons of migrating the RTOS scheduler implementation from software to hardware. We investigate three different RTOS scheduler implementation approaches: (i) implemented in software running on the same processor as the application tasks, (ii) implemented in software running on a co-processor, and (iii) implemented in hardware, while application tasks run on a processor. We demonstrate the effectiveness of each approach by simulating and analyzing a set of benchmarks representing different embedded application classes.
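The scheduling decision being migrated is, at its core, a ready-queue selection. As one illustrative policy (fixed priority with FIFO tie-breaking; the paper compares where this logic runs, not this specific code), a minimal sketch:

```python
import heapq

class ReadyQueue:
    # Fixed-priority ready queue: lower number means higher priority;
    # a monotonically increasing sequence number keeps FIFO order among
    # tasks of equal priority.
    def __init__(self):
        self._heap = []
        self._seq = 0

    def make_ready(self, priority, task):
        heapq.heappush(self._heap, (priority, self._seq, task))
        self._seq += 1

    def pick_next(self):
        # The dispatch decision: O(log n) in software, but a hardware
        # priority encoder can resolve it in a few clock cycles.
        return heapq.heappop(self._heap)[2]

rq = ReadyQueue()
rq.make_ready(3, "logger")
rq.make_ready(1, "motor_control")
rq.make_ready(2, "sensor_poll")
assert rq.pick_next() == "motor_control"
assert rq.pick_next() == "sensor_poll"
```

Moving this selection into a co-processor or dedicated hardware removes its cost and jitter from the application processor, which is the trade-off the three approaches in the paper explore.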

  • Research Article
  • Cited by: 10
  • 10.1145/192007.192021
Software versus hardware shared-memory implementation
  • Apr 1, 1994
  • ACM SIGARCH Computer Architecture News
  • A L Cox + 5 more

We compare the performance of software-supported shared memory on a general-purpose network to hardware-supported shared memory on a dedicated interconnect. Up to eight processors, our results are based on the execution of a set of application programs on an SGI 4D/480 multiprocessor and on TreadMarks, a distributed shared memory system that runs on a Fore ATM LAN of DECstation-5000/240s. Since the DECstation and the 4D/480 use the same processor, primary cache, and compiler, the shared-memory implementation is the principal difference between the systems. Our results show that TreadMarks performs comparably to the 4D/480 for applications with moderate amounts of synchronization, but the difference in performance grows as the synchronization frequency increases. For applications that require a large amount of memory bandwidth, TreadMarks can perform better than the SGI 4D/480. Beyond eight processors, our results are based on execution-driven simulation. Specifically, we compare a software implementation on a general-purpose network of uniprocessor nodes, a hardware implementation using a directory-based protocol on a dedicated interconnect, and a combined implementation using software to provide shared memory between multiprocessor nodes with hardware implementing shared memory within a node. For the modest size of the problems that we can simulate, the hardware implementation scales well and the software implementation scales poorly. The combined approach delivers performance close to that of the hardware implementation for applications with small to moderate synchronization rates and good locality. Reductions in communication overhead improve the performance of the software and the combined approach, but synchronization remains a bottleneck.

  • Single Report
  • 10.2172/348859
Process development work plan for waste feed delivery system
  • Apr 2, 1998
  • I.G Papp

This work plan defines the process used to develop project definition for Waste Feed Delivery (WFD). Project definition provides the direction for development of definitive design media required for the ultimate implementation of operational processing hardware and software. Outlines for the major deliverables are attached as appendices. The implementation of hardware and software will accommodate requirements for safe retrieval and delivery of waste currently stored in Hanford's underground storage tanks. Operations and maintenance ensure the availability of systems, structures, and components for current and future planned operations within the boundary of the Tank Waste Remediation System (TWRS) authorization basis.
