Architecture of a Database System
Database Management Systems (DBMSs) are a ubiquitous and critical component of modern computing, and the result of decades of research and development in both academia and industry. Historically, DBMSs were among the earliest multi-user server systems to be developed, and thus pioneered many systems design techniques for scalability and reliability now in use in many other contexts. While many of the algorithms and abstractions used by a DBMS are textbook material, there has been relatively sparse coverage in the literature of the systems design issues that make a DBMS work. This paper presents an architectural discussion of DBMS design principles, including process models, parallel architecture, storage system design, transaction system implementation, query processor and optimizer architectures, and typical shared components and utilities. Successful commercial and open-source systems are used as points of reference, particularly when multiple alternative designs have been adopted by different groups.
- Research Article
- 10.11648/j.ajomis.20170202.12
- Jan 23, 2017
Database Management Systems (DBMSs) are a critical component of modern computing and the result of decades of research and development in both academia and industry. DBMSs were among the earliest multi-user server systems to be developed, and thus pioneered many systems design techniques for scalability and reliability now in use in many other contexts. Cloud computing shares resources and information among devices located in different places, relying on an internet connection; a cloud DBMS is a database management system that operates through cloud computing, and the number of such systems is expected to grow in the future. This paper presents an architectural discussion of DBMS design principles, including process models, parallel architecture, query processor and optimizer architectures, and typical shared components and utilities. Open-source systems are used as points of reference when multiple alternative designs have been adopted by different groups. The paper also discusses the advantages and disadvantages of DBMSs and focuses on how to offer a cloud DBMS as an effective service. The research focuses on three main characteristics of cloud computing that are widely considered the most worrying issues of the cloud platform, and reviews cloud database challenges such as internet speed, multi-tenancy, privacy, and security. The paper thus argues for a new DBMS designed specifically for cloud computing environments.
- Research Article
- 10.12694/scpe.v4i3.242
- Jan 3, 2001
- Scalable Computing: Practice and Experience
Mahdi Abdelguerfi and Kam-Fai Wong (editors), IEEE Computer Society, Los Alamitos, CA, 1998, 232 pages, ISBN 0-8186-8398-8, $30.00 Members / $40.00 List. This book is a collection of specialized reports on parallel database techniques. It is written by a group of experts in the modern use and design of parallel database systems and architectures. The purpose of the book is to inform designers and users of large database systems about the existing capabilities of software and hardware to accommodate parallelization of database management systems (DBMSs), and the authors have the merit of communicating their results in a highly technical and very straightforward manner. After a brief introduction, the book is structured in three main parts: Request Manager, Parallel Machine Architecture, and Partitioned Data Store. The parallelism of databases arises naturally from their underlying data model; in particular, relational data models (RDM) lend themselves well to parallelization. Parallelism in the RDM is mainly acquired via the independence between two tables, and also between two tuples within a table.

The request manager part of the book comprises four chapters. Parallel query optimization techniques, with an algorithm for the XPRS shared-memory parallel database system, are presented first, followed naturally by a novel approach to parallel join, namely page connectivity information. Performance evaluation tools for parallel database systems are next. An example of the Software Testpilot in the performance assessment of Oracle V7.x on the nCUBE with 64 nodes supports the analysis of performance evaluation. In the area of data management, two issues are addressed: load placement and recovery. In parallel techniques, load placement is a key issue, and an analysis of a few load placement schemes is included. Recovery in a client-server database system is a very complex process whose success depends mainly on avoiding an inconsistent database state. The authors introduce a framework for recovery analysis based on the ACTA formalism and compare the recovery requirements of three client-server systems: ESM-CS, ARIES/CS, and a Shared Nothing with Disks (CD).

Three chapters are related to parallel machine architecture. First, a chapter on parallel strategies for a petabyte multimedia database computer provides an analysis of the new concepts and technical challenges addressed by the new database paradigm. A Multimedia Data Warehouse concept is introduced, with an application from a National Institute of Standards and Technology (NIST) medical knowledge bank program that uses the concepts and multimedia object/relational database systems described in this chapter. The analysis of a petabyte system is based on the Teradata Multimedia Database System (Teradata MM), the Teradata Multimedia Object Server architecture, and the Teradata Relational Database System, with the use of parallel infrastructure and computer platforms like the DBC/1012 and WorldMark 5100M. Work on a prototype self-organizing multiprocessor database machine, MEDUSA, is presented next. MEDUSA is a parallel data server based on a shared-nothing (SN) architecture, and performance testing shows that it is well suited as a research prototype and as a backend server to currently available conventional processors. MEDUSA is intended as an economical data server built from off-the-shelf INMOS Transputer components.
The next chapter introduces the system software of the Super Database Computer SDC-II, a highly parallel database server aimed at processing large-scale and complex queries. SDC-II is realistically compared with commercial products, and despite its limitations due to disk capacity, the benchmark tests show that SDC-II is based on an efficient and promising approach. The last chapter is an analysis of data placement in parallel database systems. The layout of the data across the processors can have a significant impact on the performance of a parallel DBMS. Five different data placement strategies are considered. The authors present a study realized with STEADY (System Throughput Estimator for Advanced Database Systems), an analytical tool for performance estimation of SN parallel DBMSs. The authors focus mainly on the study of data placement strategies at the processor level.

Database technology is now expanding, and the need to handle very large heterogeneous databases makes parallelization the only valid approach for the next generation of DBMSs. The high potential of parallel databases and the rapidly increasing sizes of databases require that both vendors and users have a deep understanding of parallel database systems. The book is intended for well-informed specialists in DBMSs and for computer engineers who design parallel architectures able to support fast and frequent database transactions. The background and training required to read this book is at the level of graduate studies with practical experience, and preferably knowledge of commercial products, both in database management and in parallel architecture and networking. The reader will find a broad range of subjects discussed in the book in the area of parallel databases, and it is a must-read for implementers and designers of modern large databases. Although a great reference, the book is not intended as a text for a course. For the less informed reader, this book is not what is usually understood by a self-contained book, mainly because of its level of technicality and the lack of low-level introductions to each subject presented. The results communicated are new and of good applicability to near-future approaches in the area of parallel database transactions and architecture design. Michelle Pal, Los Alamos National Laboratory, Los Alamos, New Mexico
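The book's recurring themes of data placement and parallel join can be made concrete with a small example. The following is a minimal sketch (not taken from the book; the node count, record shapes, and function names are illustrative assumptions) of hash-based data placement in a shared-nothing architecture, where tuples sharing a join key land on the same node so an equi-join can run independently on each node:

```python
# Minimal sketch of hash-based data placement in a shared-nothing parallel DBMS:
# tuples whose join keys hash to the same value land on the same node, so an
# equi-join on that key needs no inter-node data movement.

NUM_NODES = 4  # illustrative cluster size

def place(tuples, key):
    """Partition tuples across nodes by hashing the join key."""
    nodes = [[] for _ in range(NUM_NODES)]
    for t in tuples:
        nodes[hash(t[key]) % NUM_NODES].append(t)
    return nodes

def local_join(r_part, s_part, key):
    """Hash join executed independently on a single node's partitions."""
    index = {}
    for r in r_part:
        index.setdefault(r[key], []).append(r)
    return [{**s, **r} for s in s_part for r in index.get(s[key], [])]

# Two relations: orders referencing customers by cid.
orders = [{"cid": i % 10, "oid": i} for i in range(100)]
customers = [{"cid": i, "name": f"c{i}"} for i in range(10)]

r_nodes, s_nodes = place(orders, "cid"), place(customers, "cid")
result = [row for n in range(NUM_NODES)
          for row in local_join(r_nodes[n], s_nodes[n], "cid")]
print(len(result))  # 100: every order found its customer on its own node
```

Hash placement is only one of the strategies the book analyzes; round-robin and range placement trade this colocation benefit against load balance.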
- Conference Instance
- 10.1145/1403375
- Mar 10, 2008
Welcome to the DATE 08 Conference Proceedings. DATE combines the world's favourite electronic systems design conference and Europe's leading international exhibition for electronic design, automation and test, from system-level hardware and software implementation right down to integrated circuit design. The DATE 08 event features a technical program with 77 sessions covering the latest developments in system design and embedded software, IC design/test methodologies and EDA tools, together with an exhibition with the leading EDA, silicon and IP providers showing their new products and services. Challenges that you all face, or soon will face, in your daily practice are the increasing design complexity of highly integrated systems, the introduction of reconfigurability and embedded software, and the control of power and variability in nanometer IC designs. All these issues will be addressed in this year's DATE event. For the 11th successive year DATE has prepared an exciting technical programme, with the help of the more than 400 members of the Technical Programme Committee, who dedicated their time to thoroughly review the 839 submissions in 37 topics, ranging from the system level down to circuit design and covering all the most relevant application domains. The submissions are organised in 4 major areas: D --- Design Methods, Tools, Algorithms and Languages; A --- Application Design; T --- Test Methods, Tools and Innovative Experiences; E --- Embedded Software. After a thorough review and selection process (with an average of 5 reviews per paper), 198 regular papers were finally selected for presentation at the conference. Additionally, there are 46 Interactive Presentations that are organised in 5 IP sessions. Together with the invited special sessions (panels, embedded tutorials and hot-topic sessions), this has resulted in a high-quality technical program. The technical program provides a wide but high-quality coverage of design, design automation and test topics, from the system level to the integrated circuit level. Compared with previous years, submissions in the Embedded Software track have increased by 50%, showing a clear trend towards a comprehensive system design focus with integrated hardware and software solutions. DATE has established itself firmly as a true Electronic System Design Conference. This year the conference is held in Germany, at the ICM in Munich, and spans an entire working week from Monday March 10 to Friday March 14. On Monday, eleven pre-conference tutorials will be given. The three full-day tutorials cover topics of great interest for system design. The first tutorial deals with techniques for automatically realising embedded systems from high-level functional models. The second tutorial addresses the different issues related to communication-based design and architectures in automotive electronic systems. The third tutorial discusses several key concepts of system-level design and application mapping for wireless and multimedia MPSoC architectures. Furthermore, eight half-day tutorials are also given, which cover a wide spectrum of topics on specification, modelling, design and test. The main conference opens on Tuesday March 11, with two very interesting and complementary keynote speeches. Dominique Vernay, Chief Technical Officer for Thales, will talk about the challenges of embedded systems design, and Giovanni de Micheli, Professor at EPFL, will present his views on designing micro/nano systems for a safer and healthier tomorrow.
On the same day, the Executive Track offers a series of business panels with executives discussing hot topics in design: the perils of 45 nanometers, the changes in EDA strategies from IDM to fab-lite to fabless, and embedded systems level design strategies. DATE 08 will again offer two specific days related to special themes. On Wednesday March 12, a special full-day track is devoted to Automotive Electronics---Software and Architecture. This special-day track will focus on the challenges faced by the automotive supply chain, with particular attention to system and software architecture design. It provides a comprehensive analysis of the evolution of automotive architectures, including ECUs, sensors and communication standards, and discusses how new methods, tools and standards for interoperability and component-based design can deal with the increasing complexity of software systems and their need for reliability and guaranteed timely behaviour. In addition, it addresses to what degree existing standards, including AUTOSAR and FLEXRAY, and model-based development can support the development of safety- and time-critical software. On Thursday March 13, a second special full-day track focuses on Dependable Embedded Systems, addressing both conceptual and applied issues for the design, analysis and validation of dependable embedded systems. The utility of embedded systems and services is based, in large part, on our depending on their sustained functionality in spite of encountered operational or malicious disruptions. As the number of transient and also permanent disruptions (given the decreasing device geometries, higher device density, lower voltage latching, faster clocks, etc.) is expected to increase substantially, this will be a key issue not only for the hardware community but also for the systems community in general. Solutions using a combination of hardware and software might be more effective than hardware-only or software-only solutions. Besides these special tracks, the main conference is organized in six parallel tracks of sessions: three devoted to design methods, tools, algorithms and languages, one to application design, one to test methods, tools and innovative experiences, and finally one to embedded software. The presentations of the selected regular papers in these parallel tracks are complemented by nine Special Sessions and two Invited Industrial Sessions. The special sessions are organized in the form of panels, hot topics, and embedded tutorials. The topics to be covered include quantitative evaluation for embedded systems design, design and manufacturing at 32 and 22nm, software for wireless networked embedded systems, test challenges for low-power devices, quantitative productivity measurement in IC design, and 3D integration. The two invited industrial sessions deal with industrial system designs in transportation and information technologies. To emphasise that DATE is the major event for designers, DATE 08 also features invited sessions where Europe's famous consumer industry presents its best designs and design practices. Friday March 14 is the day for the DATE workshops. DATE 08 offers workshops on current and emerging important issues in design, test, EDA and software to complement the regular conference.
They provide a unique opportunity for the various research and design communities to spend a day discussing the latest and the best, sharing their experiences and visions. This year's workshop program includes eight workshop themes. Four workshops are related to software engineering, ranging from dependable software to modelling, analysis and development tools. Four other workshop themes cover a variety of topics such as the impact of process variations on design and test, the merging world of Nano-Electro-Mechanical Systems, new directions in high-level synthesis, and heterogeneous reconfigurable hardware. Each workshop features presentations, invited or submitted, from highly distinguished academic and industrial researchers. Finally, throughout the conference days, DATE offers a comprehensive overview of commercial design and verification tools in its large exhibition hall. Exhibitors include EDA vendors and silicon, FPGA and IP providers showing their new products and services. In addition, there is an Exhibition Theatre featuring talks from engineering managers of the leading electronic manufacturers on first-hand design experiences with commercial EDA tools. New this year is the European projects village, where different European and large government-funded projects will be able to show their ongoing research and results to the design community and have the opportunity for internal meetings and discussions. The DATE week will also be an opportunity for students and universities to show their research work, through the PhD Forum on Monday evening and the University Booth, where hardware and software demonstrations will be shown by different universities on a rotating schedule. The DATE 08 program will be particularly attractive to industrial designers at the analog, IC, FPGA and embedded system levels, as well as to software designers, researchers and academics, and design managers, and increased attendance is expected. We therefore invite you to take full advantage of the many opportunities offered to you by DATE 08, to extend your knowledge and/or business in electronic systems design and to exploit the abundant networking possibilities offered to socialise with colleagues, including fringe meetings and a memorable social party with bands and dancing. We hope that you will all enjoy the DATE 08 Conference and Exhibition.
- Research Article
- 10.5075/epfl-thesis-6644
- Jan 1, 2015
Nowadays, business and scientific applications accumulate data at an increasing pace. This growth of information has already started to outgrow the capabilities of database management systems (DBMSs). In a typical DBMS usage scenario, the user must define a schema, load the data, and tune the system for an expected workload before submitting any queries. Copying data into a database is a significant investment in terms of time and resources, and in many cases unnecessary or even no longer feasible in practice due to the explosive data growth. Additionally, the way a DBMS stores and organizes data during loading determines how the data will be accessed for a given workload and, thus, the maximum achievable performance. Selecting the underlying data layout (row-store or column-store) is a critical first tuning decision that cannot be changed later. Query workloads, however, are not static; they evolve as queries change. Hence, static design decisions can be suboptimal. In this thesis, we advocate in situ query processing as the principal way to manage data in a database. We reconsider the data loading phase and redesign traditional query processing architectures to work efficiently over raw data files, addressing the heavy initialization cost that comes with data loading. We present adaptive data loading as an alternative to traditional full a priori data loading. We explore the potential of in situ query processing in the context of current DBMS architectures. We identify performance bottlenecks specific to in situ processing, and we introduce an adaptive indexing mechanism, the positional map, which maintains positional information to provide efficient access to raw data files, together with a flexible caching structure and techniques for collecting statistics over raw data files. Moreover, we design a flexible query engine that is not built around a single storage layout but can exploit different storage layouts and data execution strategies in a single engine. It decides during query processing which design fits the input queries and adapts the underlying data storage accordingly. By applying code generation techniques, we dynamically generate access operators tailored to specific classes of queries. This thesis revises the traditional paradigm of loading, tuning, and then querying by using in situ query processing as the principal way to minimize data-to-query time. We show that raw data files should not be considered "outside" the DBMS and that full data loading should not be a requirement for exploiting database technology. On the contrary, techniques specifically tailored to overcome the limitations of accessing raw data files can eliminate the data loading overhead, making raw data files first-class citizens, fully integrated with the query engine. The proposed roadmap can provide guidance on how to convert any traditional DBMS into an efficient in situ query engine.
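Of the mechanisms described in this abstract, the positional map is the most concrete. Below is a minimal sketch of the idea (the class name, API, and CSV format are illustrative assumptions, not the thesis's actual design): a first scan over a raw file records byte offsets, so later queries can seek directly to tracked fields instead of re-parsing the file from the start.

```python
# Minimal sketch of a positional map for in situ query processing: a first
# scan over a raw CSV records the byte offset of each row and of selected
# ("tracked") columns, so later queries seek directly instead of re-parsing.

class PositionalMap:
    def __init__(self, path, delim=b","):
        self.path, self.delim = path, delim
        self.row_offsets = []    # byte offset where each row starts
        self.field_offsets = {}  # (row, col) -> byte offset of that field

    def build(self, track_cols=(0,)):
        """One full pass over the raw file, remembering positions."""
        with open(self.path, "rb") as f:
            while True:
                off = f.tell()
                line = f.readline()
                if not line:
                    break
                row = len(self.row_offsets)
                self.row_offsets.append(off)
                col_off = off
                for col, field in enumerate(line.split(self.delim)):
                    if col in track_cols:
                        self.field_offsets[(row, col)] = col_off
                    col_off += len(field) + 1  # +1 for the delimiter byte

    def read_field(self, row, col):
        """Seek straight to a tracked field; no earlier rows are re-parsed."""
        with open(self.path, "rb") as f:
            f.seek(self.field_offsets[(row, col)])
            return f.readline().split(self.delim)[0].rstrip()
```

As a usage sketch: after `pm = PositionalMap("data.csv"); pm.build(track_cols=(0, 2))`, a later call to `pm.read_field(10, 2)` touches only the bytes of the requested field rather than re-tokenizing rows 0 through 9.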
- Conference Article
- 10.1145/191246.191265
- Jan 1, 1994
Twenty years of AI research in knowledge representation has produced frame knowledge representation systems (FRSs) that incorporate a number of important advances. However, FRSs lack two important capabilities that prevent them from scaling up to realistic applications: they cannot provide high-speed access to large knowledge bases (KBs), and they do not support shared, concurrent KB access by multiple users. Our research investigates the hypothesis that one can employ an existing database management system (DBMS) as a storage subsystem for an FRS, to provide high-speed access to large, shared KBs. We describe the design and implementation of a general storage system that incrementally loads referenced frames from a DBMS, and saves modified frames back to the DBMS, for two different FRSs: LOOM and THEO. We also present experimental results showing that the performance of our prototype storage subsystem exceeds that of flat files for simulated applications that reference or update up to one third of the frames from a large LOOM KB.
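The incremental load/save pattern the authors describe can be sketched briefly. The following is a minimal illustration (the sqlite3 backend, JSON slot encoding, and class names are assumptions for the sketch; LOOM and THEO use their own storage interfaces) of loading referenced frames from a DBMS on demand and writing modified frames back:

```python
# Minimal sketch of a DBMS-backed frame storage subsystem: frames are loaded
# from the database only when first referenced, and modified ("dirty") frames
# are written back on save.
import json
import sqlite3

class FrameStore:
    def __init__(self, db_path=":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS frames (name TEXT PRIMARY KEY, slots TEXT)")
        self.cache = {}     # frames already loaded into memory
        self.dirty = set()  # frames modified since the last save

    def get(self, name):
        """Incrementally load a frame the first time it is referenced."""
        if name not in self.cache:
            row = self.db.execute(
                "SELECT slots FROM frames WHERE name = ?", (name,)).fetchone()
            self.cache[name] = json.loads(row[0]) if row else {}
        return self.cache[name]

    def put_slot(self, name, slot, value):
        """Update one slot of a frame in memory and mark the frame dirty."""
        self.get(name)[slot] = value
        self.dirty.add(name)

    def save(self):
        """Write only the modified frames back to the DBMS."""
        for name in self.dirty:
            self.db.execute("INSERT OR REPLACE INTO frames VALUES (?, ?)",
                            (name, json.dumps(self.cache[name])))
        self.db.commit()
        self.dirty.clear()
```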
- Research Article
- 10.32520/stmsi.v11i2.1717
- May 21, 2022
- SISTEMASI
Entering the era of the fourth industrial revolution, the need for digital workers is increasing. However, based on available data, Indonesia's digital talent is still in short supply, and there is a mismatch between the supply of labor and the needs of industry, which has led to an increase in the unemployment rate in Indonesia. This gap must be a concern of educational institutions, especially universities, which should provide educational designs that match industry needs. Systems analysis and design courses play an important role in the development of digital skills, especially for the systems analyst profession, which is in high demand at present. Based on the various skills that companies require of systems analysts, this study reviews the relevance of the systems analysis and design curricula used in universities in Indonesia. The study uses content analysis to analyze information from systems analyst job advertisements and course lesson plans (RPS) for systems analysis and design courses. A correlation test shows that there is no relationship between systems analysis and design courses and the systems analyst skills currently needed in industry. A Cartesian diagram reveals two skill categories that should be prioritized for improvement in the course's learning plan, namely testing (SIT and UAT) and basic programming.
- Book Chapter
- 10.4018/978-1-930708-44-0.ch018
- Jan 1, 2002
Interest in and attention to knowledge management have exploded recently. But integration of knowledge process design with information system design has long been missing from the corresponding literature and practice. The research described in this paper builds upon recent work focused on knowledge management and system design from three integrated perspectives: 1) reengineering process innovation, 2) expert systems knowledge acquisition and representation, and 3) information systems analysis and design. With this work, we now have an integrated framework for knowledge process and system design that covers the gamut of design considerations from the enterprise process in the large, through alternative classes of knowledge in the middle, and on to specific systems in the detail. We illustrate the use and utility of the approach through an extreme enterprise example, Navy carrier battle groups in operational theaters, which addresses many factors widely considered important in the knowledge management environment. Using this integrated methodology, the reader can see how to identify, select, compose and integrate the many component applications and technologies required for effective knowledge system and process design.
- Research Article
- 10.1145/83880.84529
- Sep 1, 1990
- Communications of the ACM
In software engineering, the traditional description of the software life cycle is based on an underlying model, commonly referred to as the “waterfall” model (e.g., [4]). This model initially atte...
- Abstract
- 10.5210/ojphi.v11i1.9839
- May 30, 2019
- Online Journal of Public Health Informatics
Informatics & Surveillance in Global Health: Informatics Capacity for Zika Outbreak
- Research Article
- 10.5121/ijccsa.2012.2603
- Dec 31, 2012
- International Journal on Cloud Computing: Services and Architecture
Handling unstructured data such as images, video, and textual data in a database management system is a difficult task. This work introduces the concept of cloud algebra to handle unstructured data in a cloud database management system (CDBMS). Relational algebra, the most widely used such formalism, supports query processing in SQL for relational database management systems; object algebra plays the same role for managing objects in object-oriented database management systems. The concept of cloud computing has now been introduced into the field of databases, and the cloud database management system is a recent development for managing data spread over the internet as clouds. This work proposes cloud algebra for query processing over unstructured data in the cloud: it facilitates creating new clouds of unstructured data, setting relationships among clouds, and updating, deleting, and retrieving unstructured data in the cloud. Cloud algebra provides powerful computation for query processing in a CDBMS handling unstructured data.
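Since the abstract explains cloud algebra by analogy with relational algebra, a minimal sketch of the relational operators it refers to may help (the cloud algebra's own operators are not defined in the abstract, so none are invented here; the names and tuple encoding are illustrative):

```python
# Minimal sketch of the relational algebra operators the abstract builds on:
# relations are sets of tuples, encoded here as frozensets of (attr, value).

def rel(*rows):
    """Build a relation from plain dicts."""
    return {frozenset(r.items()) for r in rows}

def select(relation, predicate):
    """Selection sigma_p(R): keep tuples satisfying the predicate."""
    return {t for t in relation if predicate(dict(t))}

def project(relation, attrs):
    """Projection pi_A(R): keep only the listed attributes."""
    return {frozenset((k, v) for k, v in t if k in attrs) for t in relation}

def union(r, s):
    """Union of two union-compatible relations."""
    return r | s

employees = rel({"name": "ada", "dept": "db"},
                {"name": "bob", "dept": "os"})
print(project(select(employees, lambda t: t["dept"] == "db"), {"name"}))
# {frozenset({('name', 'ada')})}
```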
- Book Chapter
- 10.1007/978-4-431-68189-2_70
- Jan 1, 1992
Typical modeling techniques for information system analysis and design treat key system requirement parameters as static. In addition, system dynamics reflected in time-path behavior, such as queues and bottlenecks, are not captured in traditional information system process models. A more realistic approach to information system analysis and design, which would allow decision makers to make more informed choices on information system design alternatives, might be to include the dynamic aspects of a system and to model those components for which uncertainty exists in a probabilistic fashion. In this paper we propose a paradigm for integrating conventional process modeling in systems analysis and design with simulation modeling and analysis techniques. Simulation analysis enhances the modeling process by allowing systems analysts to experiment with and analyze alternative system designs. In addition, by including the distributional characteristics and, thus, the variability of key system parameters in the model, sensitivity analysis may be performed and the robustness of alternative system designs can be explored. Our proposed methodology for information system analysis and design is illustrated with an example of an order entry information system.
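The order entry example invites a small illustration of what simulation adds over a static process model. Below is a minimal sketch (the arrival and service rates are invented for illustration; the paper's own model is not reproduced) of a single-server order entry queue, exposing the waiting-time behavior that a static model cannot capture:

```python
# Minimal sketch: simulate a single-server order entry station as an M/M/1
# queue to expose time-path behavior (waiting, queue buildup) that a static
# process model cannot capture.
import random

random.seed(42)
ARRIVAL_RATE = 0.9   # orders arriving per minute (illustrative)
SERVICE_RATE = 1.0   # orders processed per minute when busy (illustrative)
N_ORDERS = 100_000

clock = 0.0           # arrival time of the current order
server_free_at = 0.0  # when the clerk finishes the previous order
total_wait = 0.0

for _ in range(N_ORDERS):
    clock += random.expovariate(ARRIVAL_RATE)  # next arrival
    start = max(clock, server_free_at)         # wait if the clerk is busy
    total_wait += start - clock
    server_free_at = start + random.expovariate(SERVICE_RATE)

print(f"average wait: {total_wait / N_ORDERS:.1f} min")
# M/M/1 theory predicts Wq = rho / (mu - lambda) = 0.9 / 0.1 = 9 minutes, so
# a station that looks only 90% utilized on a static diagram queues badly.
```

Varying the rates in such a model supports exactly the sensitivity analysis the paper advocates for comparing alternative system designs.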
- Research Article
- 10.1002/cpe.1344
- Nov 26, 2008
- Concurrency and Computation: Practice and Experience
Application domains such as pharmaceutical discovery, combinatorial optimization, and climate modeling demand performance levels well beyond the abilities of current high-performance uniprocessor architectures, for which parallel computing has long been seen as a key enabling technique. To meet this challenge, researchers in academia and industry have developed numerous algorithmic and architectural solutions to take advantage of the natural performance benefits of the parallel computation paradigm. Parallel computing and parallel architectures are not only for high-performance computing. The relentless increase in very large-scale integration (VLSI) capacities has enabled the migration of these architectures, and thus computing paradigms, to virtually any computing device, from the multi-core desktop computer to the embedded automobile control system. Regardless of the many structural and system-level differences between the parallel computing solutions proposed, developed and ultimately deployed in the various application domains, parallel computing relies on multiple processing units executing concurrently, cooperating and synchronizing in the pursuit of completing common tasks as quickly as possible. Despite its intuitive appeal, parallel computing is fraught with peril. The concurrent paradigm requires a fundamentally distinct programming mindset. To effectively exploit the computational abilities of parallel architectures, programmers must reason about the interactions between concurrent activities. Notoriously hard are the problems raised by the synchronization required to access shared resources, as programmers must be aware of the possibilities of load imbalance, livelock, starvation and even deadlock between the many executing tasks. The existence of heterogeneous resources and, in some domains, real-time requirements exacerbates the difficulty of programming parallel architectures. Researchers in industry and academia have developed numerous programming models and languages to facilitate programmability in parallel computing, most notably allowing programmers to express concurrency and synchronization at various levels of granularity. Programmers also have at their disposal a wealth of tools such as compilers or performance analyzers, lowering their burden in the process of mapping computations to these parallel architectures. The continuing intense research in areas related to parallel computing and parallel architectures underscores the difficulty of the issues raised when programming these machines. The Compilers for Parallel Computers (CPC) workshop series has been devoted to addressing the many challenges in parallel computing while still recognizing the value of parallel computing's basic techniques and application areas. The main goal of the workshop is to bring researchers in compilation and associated areas together in an informal setting and a relaxed atmosphere in order to exchange ideas and to foster collaboration, covering all areas of parallelism and optimization: from embedded systems to large-scale parallel systems and computational grids. Among the many aspects covered in this year's edition of the workshop, we have focused in this special issue on three key areas of active research, namely programmer productivity techniques and tools, performance portability and robustness, and architecture-specific compilation techniques and analyses.
In the category of articles devoted to programmer productivity, the first article, by Wu et al., describes an early experience in building and optimizing a complete system using Software Transactional Memory (STM). The authors analyze the performance of the complete compiler and run-time management systems for STM for three applications. They present very respectable average performance speedups using STM and identify the bottlenecks and opportunities for reducing the TM overheads. Programmer intervention is minimal, thus reducing the burden of this promising approach. The second article, by Fraguela et al., addresses the balance between programmer productivity, maintainability and performance for the domain of numeric tiled stencil codes with overlapping shadow data regions. The authors describe a language data type, the hierarchical tiled array (HTA), which allows a programmer to easily express a wide range of algorithms. They discuss various implementation issues and present experimental results of the application of HTAs to a small set of scientific codes, both sequential and parallel. The results reveal performance comparable to that attained by hand-optimized codes, but at a fraction of the programming time. Finally, the third article, by Ronne et al., describes combining static range analysis with efficient run-time checking to eliminate dynamic array bound checks in Java programs. The analysis results are used to derive linear constraints the code must comply with, which are then added to the mobile code as annotations. The authors present experiments on a publicly available benchmark set of codes demonstrating the effectiveness of this approach. In the category of articles devoted to performance portability and robustness, the first article, by Khan et al., describes a template-based code specialization approach that leverages the results of static analysis to reduce the number of modifications performed for each selected template at run time. The authors present experimental results for the non-trivial benchmarks ATLAS and FFTW, revealing that this approach is able to attain a good speedup with a minimal increase in code size. The second article, by Djoudi et al., describes a code specialization methodology for loops, composing specialized code versions tailored to specific ranges of the number of iterations they execute. The generated code relies on loop peeling and prefetching transformations performed at the binary code level without interfering substantially with loop software pipelining. The authors present promising experimental results for a limited set of codes drawn from the SPEC benchmark suite. In the category of articles devoted to architecture-specific compilation techniques and analyses, the first article, by Varbanescu et al., evaluates possible mapping scenarios of a real multimedia analysis application on a heterogeneous multi-core processor, the Cell Broadband Engine (Cell/B.E.) architecture. The study focuses on exploiting task- and data-parallelism for high performance. Despite being an isolated study, the article provides insights into the utilization of both low-level and high-level optimizations, valuable to other researchers and engineers facing similar performance goals. Lastly, the article by Lu et al.
presents a register allocation algorithm that incorporates global and local program knowledge, targeting a VLIW DSP processor with distributed register files whose port access is highly restricted. The experiments use an industry-strength compiler and, despite focusing only on a well-known set of small benchmarks, suggest the approach is promising for emerging heterogeneous embedded architectures. These articles are a sample of the many interesting papers presented at the CPC 2007 workshop, held in Lisbon, Portugal, in July 2007. We hope you find them interesting. We would also like to take this opportunity to thank our colleagues Brian Carlstrom, Alain Darte, Evelyn Duesterwald, Geoffrey Fox, Robert Lucas, David Padua, Radu Rugina, Henk Sips and Byoungro So, who carefully reviewed the articles presented here, as well as the authors for their added effort in improving their articles. Finally, we would like to express our gratitude to John Wiley & Sons Ltd, to our colleague Prof. Geoffrey Fox (CC:PE Editor-in-Chief) for making this volume possible, and to our supportive editor, Rebecca Sleven, for her kind help. On a sadder note, during the time CPC 2007 was held in Lisbon, our friend and colleague Peter Knijnenburg passed away after a period of illness. We would like to take this opportunity to acknowledge his friendship and his tremendous inspiration as a leading researcher and as a human being. Prof. Michael O'Boyle and Prof. Henk Sips have contributed a special note in Peter's remembrance, which we include in this volume.
- Conference Article
- 10.1061/40946(248)31
- Oct 10, 2007
For many years, AISC has published the Seismic Provisions for Structural Steel Buildings as a companion to the main structural steel design specification, the Specification for Structural Steel Buildings. The latest editions of these two documents were published in 2005 as ANSI/AISC 341-05 and ANSI/AISC 360-05, respectively. To assist designers in applying the Seismic Provisions, the AISC Seismic Design Manual is a first-edition publication by AISC providing applied design examples that illustrate the application of the AISC Seismic Provisions and other seismic standards. The Manual, for the most part, focuses on the design of the lateral system for the same simple, regular rectangular-bay frame with each of the major braced- and moment-frame lateral system types, in both R = 3 and high-seismic applications. Detailed design examples are provided to highlight special design and detailing requirements for these systems when designed to resist seismic demands. The Manual's examples are presented using only the LRFD design method; however, it should be noted that ASD design is also permitted by the 2005 edition of the AISC Seismic Provisions. In addition to the detailed design examples, the Seismic Design Manual contains general guidance on the basics of seismic design forces and the application of ASCE 7 code requirements using the Equivalent Lateral Force Method. Tables have also been provided for the quick selection of seismically compact sections in various frame configurations, and for the selection of common design variables required for the application of the Seismic Provisions. The final portion of the document discusses the design of other elements, design concepts and systems that are not presented in detail in the first edition of the Manual. These items include the following:
1. Design of diaphragm chords and collectors
2. Discussion of how to properly apply the term "Maximum Force that can be Delivered by the System" as used in the Seismic Provisions
3. Design of Buckling Restrained Braced Frame (BRBF) systems
4. Design of Special Plate Shear Wall (SPSW) systems
5. Design of Special Truss Moment Frame (STMF) systems
6. Design of systems that include supplemental damping elements
The document has been developed to be consistent with ASCE 7-05, Minimum Design Loads for Buildings and Other Structures.