Cook-Hauptman Associates, Inc.
FUTURE TRENDS IN HARDWARE AND OPEN SYSTEMS
This paper addresses, for the 1988 time frame, trends in hardware
(i.e., workstations, local area networks, and computational servers) and open
systems. An engineering scenario and analysis are presented to expose fundamental
shifts in business values and orientations and to portray the ends toward which the
trends are directed. Each topic is discussed in terms of its driving force,
contributing technologies, most critical choice, best selection, and
trends.
Those responsible for computer aided design
(CAD) in the late eighties are faced with difficult decisions on many fronts:
technical, managerial, and organizational. The biggest challenges to realizing
the promises of CAD/CAM are managerial, but even so, the challenges in the
technical and organizational arenas are substantial. Furthermore, the classical
decision-making approach of dividing problems into separate parts is becoming
inappropriate as engineering decisions become more dependent on overall business
issues. Therefore, the first section of this paper, "Engineering in the Late
1980s," puts the CAD decisions in a larger business context. The remainder of
the paper presents CAD hardware and open-systems technical trends within this
business context.

ENGINEERING IN THE LATE 1980s
The engineer of the late 1980s begins the day by, say, reading the electronic
mail and perusing his or her project's electronic bulletin board. Although
the engineer has a powerful graphics workstation, that much computing power
isn't needed for these initial tasks; at least, though, they are performed using the
same user interface and the same familiar collection of software-toolkit resources
as heftier tasks. On the surface, this engineer's day may look like an orderly,
straightforward evolution of today's hodgepodge of memos, meetings, scheduling,
consultations, and computer hassles. The apparent ease with which designs progress
results from a large investment in new means and methods for the conduct of
engineering. These new means are carefully selected and integrated systems
which complement a new set of managerial values and organizational orientations.
Tables 1 and 2 summarize these new values and orientations.

CAD hardware and open-systems trends that will be sustained are those which
complement these paradigm shifts. As an example, the trend toward workstations
is a significant facilitator of timeliness and creativity in the pursuit of
high-margin designs. As a counterexample, the conspicuous trend away from
"islands of automation" can be attributed not only to their inherent lack of
integrability, but also to the lack of timeliness caused by manually iterating
around the simulate-design-analyze loop.
Furthermore, as engineering design activities blend into an engineering design
process, the flexibility of components becomes more important than the
performance of components.

WORKSTATIONS

Driving Force
The driving force in workstations is to give
individual engineers sufficient computing power and memory capacity to allow them to
create, simulate, and analyze their designs interactively for all but their most
comprehensive analyses. The computing-power challenge is to provide high-powered
vector processing, image processing, and general-purpose processing. The memory
challenge is simply to have sufficient memory for 90 to 95% of all tasks at
affordable prices. This challenge translates into 1 to 10 million bytes
(megabytes) of resident semiconductor memory and 20 to 100 megabytes of rotating
magnetic memory for under $10,000.

Contributing Technologies

The technologies contributing to workstation trends are CMOS semiconductors,
gate arrays, surface-mounted chips, and magnetics.
CMOS semiconductors will reach 1-2 million circuits per chip, resulting in
1-megabit memory chips and 5 million instructions per second (mips) processors
running at 24 million hertz (megahertz). Gate arrays will be the major technology
for implementing high-speed customized logic. Surface-mounted chips will
effectively double board capacity and allow all workstations to be desktop
consoles. Conventional magnetic memories (not optical and not vertical magnetic
recording) will be preferred for rotating memory and will have twice today's
capacity at today's prices.
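As a back-of-envelope check (an illustrative calculation, not from the paper), the quoted processor and memory figures imply the following rough relationships:

```python
# Arithmetic implied by the workstation figures quoted above.
# All inputs are the paper's round numbers, not measured data.

clock_hz = 24e6        # 24-megahertz CMOS processor
mips = 5               # 5 million instructions per second
cycles_per_instruction = clock_hz / (mips * 1e6)
print(f"{cycles_per_instruction:.1f} clock cycles per instruction")

# 1-megabit chips needed to populate 1 to 10 megabytes of resident memory
bits_per_chip = 1_000_000
for megabytes in (1, 10):
    chips = megabytes * 8_000_000 / bits_per_chip
    print(f"{megabytes} MB of RAM -> {chips:.0f} one-megabit chips")
```

That is, roughly 5 clock cycles per instruction, and on the order of 8 to 80 memory chips per workstation, which is consistent with the desktop-console packaging argument above.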
Workstation versus Personal Computer
Engineers will have one of two fairly distinct choices for doing their CAD
work: a workstation or a personal computer. Specific prices and specifications
are highly speculative, but they are offered in the spirit of trying to be
helpful. The engineering workstation will be a $20,000 compact desktop machine
whose specifications might be: Alternatively, there will be a $5,000
general-purpose personal computer whose specifications might be:

Selection

The choice should be determined by the nature of the usage. Engineers whose
primary usage is for management, conceptual design,
perusing designs, or elementary designing should select a personal computer,
especially since it will outperform many of today's workstations, and will cost a
tenth as much. However, engineers whose primary usage is in medium to large projects
and who regularly do detailed design or analysis should select an engineering
workstation.

Businesses which take a superficial view of return on investment, and which
provide only personal computers to engineers who by the above criteria qualify
for engineering workstations, will usually suffer negative consequences much
larger than their savings. For example, skimping on engineers' CAD tools results
in designs which, in some instances, may not be competitive, because the slowness
with which the personal computer responds adversely affects the engineer's
creativity or causes results to be late.

Trends

Consequently, the trends for engineering workstations are:

LOCAL AREA NETWORKS

Driving Force

The driving force in local area network (LAN)
communications is to have responsive, reliable communication of information
(almost exclusively data through 1988, then voice and video sometime soon
thereafter) to and from engineers' workstations (and personal computers). The
primary challenges to communication are speed and reliability, followed by a long
list of ancillary considerations. The speed challenge is to keep every user on
the network (maybe somewhere between 100 and 1000 users) from ever experiencing
any noticeable degradation of responsiveness, even during peak usage. The
reliability challenge for voice and video is nominal (a fairly large number of
errors is tolerable); for data, however, the challenge is to achieve error-free
transmission and, on the rare occasions of an error, to notify the sender and
receiver. Finally, there is a large number of ancillary considerations: distance,
interference, security, safety, ground currents, installation, splicing,
corrosion, etc. These ancillary considerations relate, generally, to the issue of
fiber optics versus coaxial cable or telephone wire.

Contributing Technologies

The major technologies contributing to LAN trends are standards, signal
processing, and GaAs and low-cost lasers. In most
respects, the rapid, innovative progress in standards over the last few years
has had the same effect as an advancing technology. The International Standards Organization -
Open Systems Interconnect (ISO - OSI) communications standards reference model
provides a framework for separate physical media and protocols to cooperate in the
communication of data, especially through multiple LANs. Signal processing and
line conditioning techniques are achieving 1-2 megabits per second over ordinary
telephone lines accustomed to maximum data rates of 64 thousand bits per second.
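To put that gain in perspective, a quick ratio (illustrative arithmetic only; the rates are the paper's round figures):

```python
# Speed-up implied by conditioned telephone lines, per the figures above.
plain_line_bps = 64_000          # classical ceiling over telephone wire
conditioned_bps = (1e6, 2e6)     # 1-2 megabits/s with signal processing

for bps in conditioned_bps:
    factor = bps / plain_line_bps
    print(f"{bps / 1e6:.0f} Mbit/s is about {factor:.0f}x the 64 kbit/s ceiling")
```

In other words, signal processing alone buys roughly a 15- to 30-fold improvement over the same installed wiring.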
Data compression techniques are achieving three times compression on data
transmission (and 10 times on voice and 30 times on video). GaAs technology (due
to its unique properties of light/electricity conversion and ultra-high speed)
and laser technology (due to the laser's unique ability to emit an exact
frequency of light in short bursts of high energy) provide the means by which to
send ultra-high data rates (1-2 billion bits per second) over fiber-optic cable
for distances measured in miles or kilometers.

Fiber Optics versus Twisted Pairs

Engineering departments will have two dramatically different alternatives for
LAN cabling: fiber optics and twisted pairs. The cabling selection,
in turn, determines the ultimate capacity of the entire local area network. The
parameters of these two choices (fiber optics versus twisted pairs) are given in
Table 3.

Selection
The selection is not as easy as the three-orders-of-magnitude difference in
bandwidth and error rate and the one-order difference in distance suggest. So
prevalent is the aversion to re-cabling that twisted pairs will probably
dominate administrative-office local area networks. However, engineering
organizations make extensive use of graphical data (rather than textual data),
whose volume and usage will accrue over time and will multiply as precision,
complexity, and pervasiveness increase. Therefore, a responsive CAD system will
require substantial data-communications capacity.

Coaxial cable (particularly IBM's 75-ohm and, to a much lesser extent,
Ethernet's 50-ohm) has as almost its only attraction being
based on a mature (almost commodity, i.e., consumer cable television)
technology; this must be weighed against fiber optics' more-than-tenfold
superiority in bandwidth and more-than-100-fold superiority in data integrity.
For that reason, coaxial cable was not included as a choice.

Trends

The trends for local area networks are:

COMPUTATIONAL SERVERS

Driving Force

The driving force is the thruput of applications.
An important distinction between servers can be made based on whether they run a
standard environment (e.g., Fortran 77, UNIX 4.2) without the need for any manual
reprogramming in order to get the bulk of their performance benefits. Those
which require no reprogramming are generally mainstream to the interests of
general engineering and CAD users. The measurement of thruput is highly
controversial because the (sometimes bizarre) architectures of computational
servers can get radically distorted results on any single measure or benchmark.
Nonetheless, the push for thruput is so strong that very proprietary
architectures (and the latest technologies) are used. Performance is generally
five to ten times that of an engineering workstation, and both are advancing one
order of magnitude every 5 years. That puts computational servers in the 25 to
50 million instructions per second (Mips) class.

Contributing Technologies

Most of the contributing technologies are the same as for workstations,
discussed above (except for magnetics), namely: CMOS semiconductors, gate-array
logic, and surface-mounted chips. The architecture
of the server is the major technology contributor to achieving higher thruput.
The architecture chosen for handling concurrency, parallelism, switching,
instruction streams, data flows, memory caching, and bus traffic generally
determines the thruput power of the server. The commitment to vectors, arrays,
and complex versus reduced instruction sets can also materially affect how a
computational server executes particular classes of jobs.

Established Name versus Start-up
Unless you are doing research or have exceptional needs for engineering
computation, I believe the issue will boil down to this choice. The Established
Name will usually have a large repertoire of running and supported engineering
analysis and simulation software. The Start-up will probably have the latest
cost/performance benefits, and may have a special-purpose niche in which it has
spectacular performance.

Selection

Since the purpose is to have a computational
advantage, the Start-up that has minimized or obviated manual reprogramming will
usually offer the best price/performance. However, the viability of these Start-ups
has to be raised as a central issue. Good indicators of business viability are
alliances with major companies (not laboratories). A truly advanced architecture
offered by an established company may also be worthwhile.

Trends

The trends in computational servers are:

OPEN SYSTEMS

Driving Force

The driving force is leading users (who are
usually large CAD system customers!) who insist on being able to repeatedly and
consistently exploit their product data throughout their entire product-design
activities (simulation, analysis, documentation, and then release to
manufacturing) without being restricted to any one vendor. These leading users
have mandated that their discrete product-data handling activities (processing,
sharing, and dissemination) be fully automated into a continuous product-process
(information) flow. This product-process flow is to become the neural network
(in the case of CAD) and the nervous system (in the case of CAM) of the Factory
of the Future, and it cannot be achieved on a broad scale without Open Systems.
By Open Systems, we mean systems whose components abide by standards which allow
users to mix and match vendors' offerings according to need and preference.

Contributing Technologies

The major technologies contributing to
Open Systems are LANs and standards. The advances in LAN technology, including
the advances in the standards on which it relies, are discussed above in the
section entitled "LOCAL AREA NETWORKS." Standards have been
rapidly emerging, not only for LAN and other communications, but also for virtual
device interfaces and data base exchanges. The ISO - OSI communications model
and the GKS framework are advances in the specifications of standards for CAD/CAM
Open Systems. On the other hand, the competition between Europe and America has
probably held back Open Systems standards.

European versus American Standards

In this ever-shrinking world, a parochial orientation such as "European versus
American" is counterproductive. What is needed is cooperation and consensus. In
the case of virtual-device-interface standards, Europe's GKS (Graphical Kernel
System) has matured to the point where it is ready for truly international
acceptance. Its conceptual framework and inherent flexibility make it superior
to its American counterparts, Core and PHIGS (Programmer's Hierarchical
Interactive Graphics Standard). The progress toward database-exchange standards
is discouraging. Neither the American IGES (Initial Graphics Exchange
Specification) nor the European SET (Standard d'Échange et de Transfert) has an
adequate conceptual framework. SET can be thought of as analogous to a compiled
IGES (i.e., it is more compact and faster, but it derives from narrower
(aerospace) interests and a centralized implementation).

Selection

In some areas the choice is easy. For example,
Fortran will continue to have wide acceptance and bindings to all relevant standards.
UNIX seems to be the de facto operating-system standard. And, in communications,
many standards comfortably coexist due to the interchangeability of standards at
each layer of the ISO - OSI model. Lastly, GKS has matured to the point where it
will experience wide acceptance.

Trends

The trends in Open Systems are:
PRESENTED AT: CAMP '85 Conference on September 25, 1985 in Berlin, Germany

https://cha4mot.com/works/cad_trnd.html as of November 23, 1997

Copyright © 1985 by James E. Cook