ARCHITECTURE THEME

TASK FORCES IN COMPUTER ARCHITECTURE

HICSS recognizes that there are a number of important topics in Computer Architecture that could benefit from focused attention on the issues, leading (hopefully) to consensus as to the direction that work in each topic area ought to take. Task Forces are now being organized to deal with the topics listed below. Each Task Force will meet during the conference to attempt to understand the issues and reach consensus on what the future direction of the topic area should be. Each Task Force will produce a white paper detailing its recommendations. The set of white papers is expected to be published as a special section of a leading professional magazine. If you wish to participate in one of the Task Forces identified below, please contact the relevant Task Force Coordinator. If you feel a particularly relevant topic area has been omitted, and you have an uncontrollable desire to organize a Task Force to deal with it, please contact Yale Patt (patt@eecs.umich.edu), who is coordinating the entire set of Architecture Task Forces.

COORDINATOR

Yale N. Patt
The University of Michigan

Task Force on the Interaction of Architecture and Compilation Technology for High-Performance Single-Processor Design

Pen-Chung Yew
University of Minnesota
yew@cs.umn.edu

David Lilja
University of Minnesota
lilja@ee.umn.edu

The past several decades have seen dramatic growth in computer performance due to a combination of improving device technologies, architectural innovations, and developments in compilation technology. Assuming that advances in device technologies can be easily integrated into new processor designs, computer designers have focused their efforts on architectural and organizational innovations and enhancing the capabilities of compilers. While a computer's "architecture" can be defined as the line that divides the operations that are implemented in hardware from those that are implemented in software, this line has become increasingly blurred and variable as some processor designs shift significant complexity from the hardware into the compiler, while others have all but ignored the optimization capabilities of the compiler. As a result, there is significant debate in the architecture community about how much compiler support is reasonable and necessary to obtain the highest possible performance in a single processor that exploits instruction-level parallelism.

For instance, very long instruction word (VLIW) architectures expose the inner structure of the processor to the compiler, thereby allowing the processor to exploit the sophisticated program analysis that can be performed by the compiler. Advocates of these architectures assume that the analyses that can be performed by a compiler with interprocedural analysis and profiling information can be much more extensive than what can be done in hardware at run-time. On the other hand, designers of superscalar processors that perform all dependence checking dynamically at run-time contend that the incomplete information available at compile-time forces the compiler to make conservative assumptions that limit the amount of parallelism that can be exploited. Consequently, the compiler is limited in what it can provide to enhance the processor's performance. Finally, there are those somewhere in the middle who feel that the combination of a processor with (perhaps limited) dynamic dependence checking and a sophisticated optimizing compiler is truly more than the sum of its parts.
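
To make the contrast concrete, here is a minimal sketch in Python of the kind of static scheduling a VLIW compiler performs: a greedy list scheduler that packs mutually independent operations into fixed-width instruction words. The toy three-address form and the two-issue machine width are illustrative assumptions, not any real product's format; a superscalar processor would discover the same independence dynamically, inside its hardware look-ahead window.

    def depends_on(op, earlier):
        """True if 'op' must follow 'earlier' (RAW, WAR, or WAW hazard)."""
        dst, srcs = op[0], op[1:]
        e_dst, e_srcs = earlier[0], earlier[1:]
        return e_dst in srcs or dst in e_srcs or dst == e_dst

    def schedule_vliw(ops, width=2):
        """Greedily pack ops into bundles of at most 'width' independent operations."""
        bundles, remaining = [], list(ops)
        while remaining:
            bundle = []
            for i, op in enumerate(remaining):
                if len(bundle) == width:
                    break
                # An op may issue only if it is independent of every earlier op
                # still in the list (including those already in this bundle).
                if not any(depends_on(op, e) for e in remaining[:i]):
                    bundle.append(op)
            for op in bundle:
                remaining.remove(op)
            bundles.append(bundle)
        return bundles

    # Each op is a (dest, src1, src2) tuple: r3=r1+r2; r4=r1*r5; r6=r3+r4.
    ops = [("r3", "r1", "r2"), ("r4", "r1", "r5"), ("r6", "r3", "r4")]
    for cycle, bundle in enumerate(schedule_vliw(ops)):
        print(f"cycle {cycle}: {bundle}")
    # The two independent ops issue together in cycle 0; the dependent add
    # must wait for cycle 1.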

What is clear is that there are several potential avenues that could be explored to continue the performance improvements that users expect in future microprocessor-based systems. By gathering together experts in processor design, compilation technology, and the integration of the two, we hope to stimulate debate about the interaction of the compiler with the processor architecture. This debate should identify the state-of-the-art in both compiler technology and computer microarchitecture, and should lead to a definition of the parameters of the design problem and, ultimately, to an identification of the research directions that are most likely to produce the greatest results.

Specific issues to be discussed may include, but are not limited to:

How should we integrate compile-time information with run-time information?

How do optimizing and parallelizing compilers drive architectural decisions?

How do architectural innovations drive the need for new compilation techniques?

How should architectures and compilers support speculative execution?

Is compiler-assisted run-time dependence checking a solution to the problems of restricted run-time look-ahead windows and incomplete compile-time information? (A sketch of this idea appears after this list of questions.)

Is deferred compilation a viable strategy?

What compiler and hardware support is needed by concurrent multithreading architectures?

Are we beginning to see a merging of processor design philosophies? Is this necessary and useful, or is it counter-productive?
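
As a concrete illustration of the compiler-assisted dependence-checking question above, here is a minimal sketch in Python of one well-known form of the technique, loop versioning; the arrays, offsets, and loop are hypothetical, and this is not presented as any particular system's design. When compile-time alias analysis is inconclusive, the compiler emits a cheap run-time overlap test and selects between a freely reorderable version of the loop and a conservative sequential one.

    def copy_loop(dst, src, dst_off, src_off, n):
        # Run-time check the compiler would insert: do the regions overlap?
        disjoint = (dst is not src
                    or dst_off + n <= src_off
                    or src_off + n <= dst_off)
        if disjoint:
            # No dependence: iterations are independent and may be reordered
            # or run in parallel (modelled here as a bulk slice copy).
            dst[dst_off:dst_off + n] = src[src_off:src_off + n]
        else:
            # Possible dependence: fall back to the original sequential order.
            for i in range(n):
                dst[dst_off + i] = src[src_off + i]

    a = list(range(10))
    copy_loop(a, a, dst_off=1, src_off=0, n=5)   # overlapping: safe path taken
    print(a)  # [0, 0, 0, 0, 0, 0, 6, 7, 8, 9]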

Task Force on Compiling for Processors with Instruction-Level Parallelism

Michael Schlansker
Hewlett-Packard Labs
schlansk@hplmss.hpl.hp.com

As processor implementations incorporate more instruction-level parallelism, compiler technology continues to play a larger role in exploiting available parallelism. The purpose of this task force is to identify compiler research areas which are critical to the problem of compiling for processors with instruction-level parallelism. Research areas should pass two tests: first, they should have the potential to provide substantial impact on the performance, efficiency, or usability of ILP processors; and second, they should describe problems which require substantial research.

The task force will develop a position statement describing key research deficiencies and research successes in the ILP compiler area. Research deficiencies represent areas where further research can provide a substantial contribution to future products. Research areas which have been very successful may be so mature that there is little room for further improvement and thus little room for substantial impact on future products. This position statement should help to identify the areas with the greatest need for further research.

Each task force member will write a position paper presenting their opinions regarding ILP compiler research: which research is relatively complete and which research remains to be done. Position statements will be distributed prior to the HICSS meeting. The goal of the task force is to reach a consensus regarding the content of a white paper on "challenges in ILP compilation". The white paper will be written and assembled after the meeting.

This task force represents an opportunity for joint industrial and academic collaboration. Industry has the opportunity to solicit future research contributions from academia in areas of importance to their products. Academia has the opportunity to gain a better understanding of industry's long term needs for technical contribution in the ILP compiler area. The workshop also provides an opportunity for compiler writers and computer architects to jointly discuss research needed to compile for ILP processors.

To encourage industrial participation, it should be understood that detailed discussion of planned products is not the aim of the task force: most industrial contributors cannot discuss details of unreleased products. Rather, the focus of the workshop is on longer-term issues which are pertinent to a broad range of products that exploit ILP. In this forum, it is hoped that we can have genuine industrial participation.

Architectural Trends for Shared-Memory Multiprocessors

Per Stenstrom
Chalmers University, Sweden
pers@ce.chalmers.se

Emerging applications as well as software and hardware technologies have always been a driving force for architectural developments, and shared-memory multiprocessors are no exception. Because multiprocessors take advantage of mainstream hardware and software technologies, they provide a particularly cost-effective solution to high-performance computing. While research into this seemingly very promising technology has received considerable attention, these machines are today used mainly as servers to increase throughput, and only in rare cases to speed up individual applications. We all know that the main reason is that these machines are not sufficiently easy to program.

The purpose of this task force is to identify the research directions in architecture, compiler, and performance debugging/tuning support to make these machines more attractive to use.

A sample of issues to discuss

People from academia and industry who work in any of the areas above (applications, compilers, performance-tuning tools, architecture) are welcome to participate. You will be expected to submit a position statement that focuses on one or more of the issues above.

Wireless Networking - Reaching the Extra Mile

Hasan Al-Khatib
Santa Clara University
halkhati@scupdc2.scu.edu

What are the necessary requirements for the development of local and metropolitan area wireless networks that provide access to the Internet infrastructure?

Configurable Computing Systems

Bill Mangione-Smith
University of California at Los Angeles
billms@icsl.ucla.edu

Configurable computing systems combine high-density FPGAs with processors to achieve the best of both worlds: customized digital circuit accelerators that are responsive to dynamic events. A number of different models have been proposed for configurable computing, including off-board data pumps, co-processors, configurable function units on the processor datapath, and configurable datapaths. Each approach provides a different set of strengths and weaknesses, along with a different model of computation. The Task Force on Configurable Computing will discuss the critical impediments which are currently limiting the use of these systems: computing models (architectural abstractions), runtime support for optimization and reconfiguration, driving applications, and FPGA technology (density, configuration time, and clock rates). Further information is available from Bill Mangione-Smith (billms@ucla.edu) or at http://www.icsl.ucla.edu/~billms/hicss97.
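
As one illustration of the runtime-support impediment, here is a minimal sketch in Python of the break-even decision a reconfiguration runtime faces: loading a new bitstream pays off only when the configuration time is amortized over enough work. All latencies below are hypothetical round numbers, not measurements of any device.

    def should_offload(n_items, cpu_us, fpga_us, config_us, loaded):
        """Offload iff total FPGA time (plus any reconfiguration) beats the CPU."""
        setup = 0.0 if loaded else config_us
        return setup + n_items * fpga_us < n_items * cpu_us

    # With a 20 ms configuration time, a 10x-faster accelerator only wins
    # once the batch is large enough to amortize loading the bitstream.
    for n in (100, 10_000, 1_000_000):
        print(n, should_offload(n, cpu_us=1.0, fpga_us=0.1,
                                config_us=20_000, loaded=False))
    # 100 False; 10000 False; 1000000 True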

Combining General-Purpose and Multimedia in One Package: Challenges and Opportunities

Tom Conte
NC State University
conte@eos.ncsu.edu

Andrew Wolfe
Princeton University
awolfe@ee.princeton.edu

Applications in graphics, video compression, and image and audio signal processing have dictated a design philosophy for multimedia processors (MMPs) that is significantly different from that of general-purpose processors (GPPs). Recent industry extensions for multimedia are attempts to merge MMPs and GPPs, with the ostensible goal of cost advantage.

This raises some interesting questions, including:

Will multimedia extensions succeed in supplanting special-purpose hardware, or will they instead occupy a lower-performance niche? Is it possible to offer the highest MMP performance in the same package with the highest GPP performance?

Are new MMP designs sacrificing too much for GPP features, especially in sub-thousand dollar systems?

Are we seeing a fundamental shift in user workloads? If so, should new designs emphasize multimedia over general-purpose features?

Given the current instruction set extensions (such as MMX, VIS, and MAX-2) what additional extensions should be included, and at what additional cost?

Are there added side benefits of multimedia extensions for GPP workloads (e.g., workloads that take advantage of the SIMD nature of minivector/packed-vector operations; a sketch of these packed operations follows this list)? What extra ISA semantics would be helpful for these workloads?

What compiler support and enabling technologies need to be developed for the new, hybrid MMP/GPP processors?
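
As background for the packed-vector questions above, here is a minimal sketch that emulates in Python the semantics of an MMX-style packed saturating add (PADDUSB); a real extension performs all eight lanes in a single hardware instruction, and the pixel values below are purely illustrative.

    def paddusb(a, b):
        """Packed add of eight unsigned bytes with saturation (MMX-style)."""
        result = 0
        for lane in range(8):
            x = (a >> (8 * lane)) & 0xFF
            y = (b >> (8 * lane)) & 0xFF
            s = min(x + y, 0xFF)          # saturate instead of wrapping
            result |= s << (8 * lane)
        return result

    # Brightening eight image pixels at once: each 0x40 lane gains 0x50,
    # and the 0xF0 lane saturates at 0xFF rather than wrapping around.
    pixels = 0xF0404040_40404040
    print(hex(paddusb(pixels, 0x50505050_50505050)))  # 0xff90909090909090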

Task Force on Network Storage Architecture

Garth A. Gibson
Carnegie Mellon University
garth@cs.cmu.edu

Storage systems represent a vital market that is growing faster than the personal computer market. The market's primary constituents are magnetic and optical disk drives, magnetic tapes, and large-capacity (robotic) assemblies of drives and cartridges. Storage hardware sales in 1995 topped $40 billion, including more than 60,000 terabytes of hard disk storage. In recent years, the amount of storage sold has been almost doubling each year; in the near future it is expected to sustain an annual growth of about 60 percent. This enormous growth rate has been accompanied by a 35-50 percent per year decrease in the cost per byte of storage. Consequently, ensuring the continued vitality of storage architecture in future computing systems is essential.
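
As a quick check of the compounding these figures imply, the following Python sketch projects shipped capacity and relative cost per byte; the 60 percent growth rate and the 1995 base are taken from the paragraph above, while the 40 percent annual cost decline (the middle of the stated 35-50 percent range) and the five-year horizon are assumptions.

    capacity_tb = 60_000      # terabytes of hard disk shipped in 1995
    rel_cost = 1.0            # cost per byte, normalized to 1995

    for year in range(1996, 2001):
        capacity_tb *= 1.60   # ~60% annual growth in capacity shipped
        rel_cost *= 0.60      # ~40% annual decrease in cost per byte
        print(f"{year}: ~{capacity_tb:,.0f} TB shipped, "
              f"cost/byte ~{rel_cost:.0%} of 1995")
    # By 2000 this projects roughly 630,000 TB shipped at ~8% of 1995 cost/byte.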

The goal of this task force is to chart the interaction between local area networks and storage architecture. Our primary tasks will be to:

Participants in the Task Force on Network Storage Architecture (see http://www.pdl.cs.cmu.edu/NASD/HICSS.html for overview and positions) may also be interested in the related Network-Attached Storage Devices industrial working group within the National Storage Industry Consortium (NSIC). The mission of this working group is to develop, explore, validate, and document the technologies required to enable the deployment and adoption of network-attached storage devices and systems. Participation in this working group takes two forms: 1) attendance at public forums on network storage architecture sponsored by the working group, and/or 2) commitment to a self-funded research collaboration with information and intellectual property sharing. For additional information on this working group, visit http://www.hpl.hp.com/SSP/NASD or E-mail our reflector, nasd@cello.hpl.hp.com.

The National Storage Industry Consortium membership consists of about sixty corporations, universities and national labs with common interests in the field of digital information storage. Corporate membership includes most major U.S. storage product manufacturers and many other companies from the storage industry infrastructure. NSIC has its headquarters in San Diego and was incorporated in April 1991 as a non-profit mutual benefit corporation. For additional information on NSIC, visit http://www.nsic.org/ or E-mail nsic@nsic.org.