Abstract

Mead and Conway wrote in their pioneering book [1]: "Many LSI chips, such as microprocessors, now consist of multiple complex subsystems, and thus are really integrated systems rather than integrated circuits." Gone are the days when the design of integrated circuits (ICs) was the sole purview of electrical engineers, or more so of solid-state device physicists. At that time, computer system designers built their systems from the standard chips available in the market. Now, however, computer architects are involved in the chip design process from the very beginning. The close interaction between engineers and computer scientists has resulted in increased automation of the whole design and fabrication process, which in turn has led to a substantial reduction in the cost and turnaround time of IC chips.

It is predicted that by the late 1980s it will be possible to fabricate chips containing millions of transistors. The devices and interconnections in such very large scale integrated (VLSI) systems will have linear dimensions smaller than the wavelength of visible light [1]. These advances in technology have had a tremendous impact on computer architecture. The long-standing semantic gap between computer software and hardware now seems to be narrowing. Several innovative ideas have been developed and implemented in VLSI; examples include the Reduced Instruction Set Computer (RISC) [2], systolic arrays [3], and the CHiP computer [4]. Ultimately, the circuits for these systems will encompass an entire wafer; such super chips will then be called wafer scale integrated (WSI) systems. At present, a wafer ranging from 2 to 8 inches in diameter can hold the equivalent of 25 to 100 microprocessors, such as the Intel 8086.
The same wafer can also hold 4 to 20 megabits of dynamic memory if a 0.8-micrometer complementary MOS (CMOS) process is used.

There is a large difference between the time taken for communication inside a chip and communication across chips, and much of the time in a general computer system is wasted in data movement between modules. The performance of a computer system can therefore be improved tremendously if as many components as possible are placed on a single chip, and VLSI offers system designers exactly this possibility. Simple and regular interconnections lead to cheap implementations, high densities, and good performance, so algorithms with simple and regular data flows are particularly suitable for VLSI implementation; matrix manipulations and the Fast Fourier Transform (FFT) are examples. Moreover, pipelining and parallelism can be employed effectively to improve overall execution speed. Systolic arrays [3] are a vivid example of special purpose, high performance systems that exploit these opportunities offered by VLSI.

In this session, we have three carefully refereed papers that exemplify the impact of VLSI on computer architecture. The first, by Kondo et al., describes an SIMD cellular array processor called the Adaptive Array Processor (AAP-1), designed and developed by the authors at the Nippon Telegraph and Telephone Public Corporation of Japan. The AAP-1 consists of a 256×256 array of bit-organized processing elements built from 1024 custom n-MOS LSIs. Its extensive parallelism offers ultra-high throughput for various types of two-dimensional data processing, and its processing speed is shown to exceed that of a 1 MIPS sequential computer by a factor of approximately 100 for certain applications. The second paper, by Hurson and Shirazi, addresses the design and performance of a special purpose hardware recognizer capable of performing pattern matching and text retrieval operations.
Because of the similarities between the scanning process during compilation and pattern matching operations, the proposed module can also be used as a hardware scanner. The VLSI design and the space and time complexities of the proposed organization are discussed. The third paper, by P.L. Mills, describes the design of a bit-parallel systolic system for matrix-vector and matrix-matrix multiplication. All the circuits described can be extended to any word length, and the modifications required for two's complement operation are also outlined.

I truly appreciate the efforts of the authors, without whom this session would not have been possible. I also thank all the other authors who submitted papers in this area. I am indebted to the referees, who spent a considerable amount of their time selecting the papers for this session. Finally, my sincere thanks are due to Terry M. Walker and Wayne D. Dominick for giving me the opportunity to chair this session on Advanced Computer Architectures.
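As background for the systolic designs mentioned above, the following is a minimal software sketch of how a linear systolic array computes a matrix-vector product. It is an illustration of the general technique only, not the hardware described in any of the session's papers: each processing element (PE) holds one accumulator, and the input vector streams past the PEs on a skewed schedule so that every PE performs exactly one multiply-accumulate per step.

```python
def systolic_matvec(A, x):
    """Emulate a linear systolic array computing y = A * x.

    PE i holds the accumulator for y[i]. On a skewed schedule,
    step t delivers x[t - i] to PE i, so each PE does one
    multiply-accumulate per step -- the simple, regular data flow
    that makes such algorithms attractive for VLSI.
    """
    n = len(A)          # number of rows, i.e. number of PEs
    m = len(x)
    y = [0] * n         # one accumulator per PE
    for t in range(n + m - 1):      # global clock steps
        for i in range(n):          # in hardware, all PEs act in parallel
            j = t - i               # skewed arrival of x elements
            if 0 <= j < m:
                y[i] += A[i][j] * x[j]
    return y
```

The sequential inner loop stands in for what the hardware does simultaneously; the point of the emulation is the schedule, under which the whole product completes in n + m - 1 steps rather than the n×m steps of a naive sequential sweep.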

