Algorithm Design for Networked Information Technology Systems
Sumit Ghosh - Springer
Foreword
I felt deeply honored when Professor Sumit Ghosh asked me to write the foreword to his book with an extraordinary perspective. I have long admired him, first as a student leader at Stanford, where he initiated the first IEEE Computer Society student chapter, and later as an esteemed and inspiring friend whose transdisciplinary research broadened and enhanced the horizons of practitioners of computer science and engineering, including my own. His ideas, which are derived from his profound vision, deep critical thinking, and personal intuition, reach from information technology to bioscience, as exhibited in this excellent book. To me, an ordinary engineer, it opens up a panoramic view of the Universe of Knowledge that keeps expanding and inspiring, like the good Indian proverb, which says, “a good book informs you, an excellent book teaches you, and a great book changes you.” I sincerely believe that Professor Ghosh’s book will help us change and advance the methods of systems engineering and technology.
Vision

Inspired vision sees ahead of others what will or may come to be, a vivid, imagined concept or anticipation. An inspired vision personifies what is good and what like-minded individuals hope for. Our vision is one of creating an Internet of minds, where minds are Web sites or knowledge centers, which create, store, and radiate knowledge through interaction with other minds connected by a universal shared network.
This vision will not just hasten the death of distance, but will also incarcerate ignorance. We note that the rapid growth of many disciplines in the sciences, humanities, technology, and business has been promoted by the convergence of ideas through knowledge transfer across traditional boundaries. This is what a distinguished business leader and entrepreneur, Professor George Kozmetsky, calls the transdisciplinary movement in education. Professor Ghosh is a great mentor, who teaches by this principle. In some sense, knowledge transfer across traditional disciplinary boundaries is the first phase in the Internet of the Mind paradigm.
Preliminary Issue
A critical and difficult-to-resolve issue is how to fine-tune educational processes and maximize the flow of educational information from the source—the transmitter (knowledge bases, mentors, books, teachers, etc.)—to the needy individual, the student at the receiving end. These processes must ensure that the recipient obtains the needed education in the most effective way at the most appropriate time and place. Two issues are paramount here: the first is how to make knowledge “fluid”; the second is how to personalize educational processes to fit the intellectual infrastructure of the recipient. The concept of fluidity is similar to the Keynesian concept of liquidity of capital, which makes knowledge easily storable, transferable, and transportable across heterogeneous intellectual boundaries. The second issue will be addressed later.
Knowledge and Technology: Their Growth and Implications
We shall now tread on the unknown and unexplored area of knowledge and technology growth. We are fully aware of their exponential growth rates, but unfortunately, there are no precise methods or rigorous approaches to measure them. A Harvard Business School study many years ago found that “knowledge doubles every seven years.” More recently, former U.S. President Bill Clinton and Vice President Al Gore have indicated in their speeches that knowledge doubles every three years, and they refer to U.S. Department of Education studies.
The
technology growth rate can be surmised from the progress of the high-
Tech semiconductor and communication industries. The famous Moore’s law
states that semiconductor chip performance doubles every eighteen
months; while the price stays the same. Thus, we shall assume that the
knowledge growth rate based on “expert” evaluation is about 30% per
year; likewise, the technology growth rate based on Moore’s law is about
66% per year. In a later section, we shall try to answer an apparent
anomaly: Why is the growth rate of technology (which is derived from
knowledge) Much higher than the growth rate of knowledge itself?
Knowledge Obsolescence Rates
Bill
Joy, the chief scientist of Sun Microsystems, has surmised in his
featured interviews and articles that professional obsolescence creeps
at the rate of 20% per year. In other words, after five years, we will be
strangers in our profes- Sional disciplines, no better than college
freshmen. One may quibble about the magnitude of the rate, but the trend
appears to have some validity. This obsolescence figure may be more
applicable to mature professionals beyond the age of 40 who have not
actively kept up with technological advances than to younger ones
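As an illustrative reading of the 20% figure (ours, not Joy’s own arithmetic): taken linearly, five years at 20% per year wipes out essentially all of one’s working knowledge, whereas taken as compound decay the fraction still current after n years is
\[ (1 - 0.2)^n, \qquad n = 5 \Rightarrow (0.8)^5 \approx 0.33, \]
so roughly one third remains. Either reading supports the qualitative conclusion above.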
Technology Growth, Knowledge Growth, and Moore’s Law
Technology
is derived from scientific knowledge, but Moore’s law tells us that
technology grows much faster than knowledge. We shall offer a subtle ex-
Planation. Knowledge is a result of rational and physical discoveries.
These processes can be slow and incremental, but they rarely leapfrog.
Knowledge and its discoveries are like dots on the radar screen in an
air traffic control center: New dots appearing, old dots always moving and
changing. Technology is derived from connecting and linking the dots
and creating useful patterns that can be made from them. It is like
knitting a beautiful mosaic that con- Nects and provides meaning
(satisfies human needs) To the patterns. Given N dots (e. G. , knowledge
discoveries or inventions), the number of patterns they create is an
exponential power of N. This heuristically explains Moore’s law and the
more rapid growth of high- Tech industry. The dots and patterns are
always moving and changing, and this also is a prominent characteristic
of technology. Technology always advances faster than knowledge. The
former subsidizes its own growth and that of science through the
creation of auto- Mated tools and computer- Aided adjuncts. This, in
turn, fuels the accelerated growth of technology, ` a la Otto Hahn’s
chain reactions and the Livermore loops
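One stylized count, added here as an illustration rather than a derivation from the book, makes the “exponential power of N” claim concrete: N dots admit \(\binom{N}{2} = N(N-1)/2\) possible pairwise links, so the number of distinct patterns, that is, subsets of those links, is
\[ 2^{\binom{N}{2}} = 2^{N(N-1)/2}, \]
which grows exponentially in N even though the dots themselves accumulate only incrementally.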
The Challenge
Our
real challenge: how to develop a feasible and realizable approach to
reduc- Ing the nonlinearly growing chasm between the advancement of
technology and the increase in the knowledge content of any individual.
In theory, the chasm cannot be completely and conclusively bridged.
Although this raises many unanswered—and perhaps unanswerable—questions,
several approaches that can slow down the overtake and obsolescence are
conceivable. A key approach is compression of learning time by
innovative teaching methods, which un- Derscores the need to personalize
educational processes to fit the intellectual infrastructure of the
recipient. In this book, Professor Ghosh proposes inter- Disciplinary
approach as the fundamental source of new teaching techniques
This transdisciplinary approach is fluid-like and flexible, allowing one to easily penetrate a number of different disciplines at once. The objective of a proper match between the knowledge generators and knowledge recipients is analogous to the issue of impedance matching in electrical engineering, which ensures an efficient transfer of electric power from the power-generating stations to the power-consuming towns. The objective is also critical because it holds the key to opening a path for every willing individual to continue to be an active and valuable participant in society, despite the march of time. We are now living longer and healthier lives and working for many more years, and the idea of mandatory retirement has clearly fallen from grace. This then propels us to envision an environment and an educational framework in which a new knowledge infrastructure may be established in each individual to help him or her remain matched with the advancing technology. It then becomes the purpose of the environment and educational framework to provide caring and personalized mentors to help in the assimilation of new technologies, casting them in the molds of individuals’ infrastructures for ultrafast absorption.
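The impedance-matching analogy borrows a standard result from circuit theory, recalled here only to unpack the comparison: a source of voltage V with internal resistance R_s delivers to a load R_L the power
\[ P_L = \frac{V^2 R_L}{(R_s + R_L)^2}, \]
which is maximized exactly when \(R_L = R_s\). By analogy, knowledge transfer is most effective when the presentation is matched to the recipient’s intellectual infrastructure.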
Even as each of us ages with time, the promise that we can continue to keep pace with exciting new technologies and remain valuable and contributing members of society as long as we wish is a shining beacon of hope.
The Three Visions and Their Manifestations in This Book
Through
many decades of my association with academic research and teach- Ing, I
have come to a firm belief that the interplay of three types of vision
is critical for progress. These include (1) Eagle vision, (2) Frog
vision, and (3) fish vision. The eagle soars high, has a commanding view
of the entire ground in every direction, and, when necessary, is able to
bring its eyes to a pinpoint focus on a scurrying rat, miles below. In
this book, the eagle- Eye vision con- Stitutes the overview, a survey, a
perception about the integrated whole, from the past, the present, and
the future. Frog vision refers to microscopic vision, or an up- Close
view of an object that reveals an extraordinary level of detail
Here, the frog-eye vision is manifested in the details of the algorithms, methods, and concepts that the book articulates so well. The fish views the entire world outside the water, all 180 degrees, compressed through a narrow cone of 98 degrees. However, its vision can detect fast changes because its eyes’ fields of vision do not overlap, and the fish cannot afford to shut its eyes even for a moment. The fish-eye vision assumes the form of an emphasis on the dynamics of change and movement and the inevitable progress and advances that the book will produce.
This book is not just for specialized scientists and engineers; its essence is easily accessible to the nonspecialists, who will find their curiosity aroused, their need to learn stimulated, and their minds changed.
Dr. C. V. Ramamoorthy, Professor Emeritus
EECS Department, University of California, Berkeley
April 3, 2003
Contents
Foreword
Preface
1 Introduction
1.1 What Are Networked Information Technology Systems?
1.2 What Are Asynchronous, Distributed Decision-Making Algorithms?
1.3 The Current Focus on Synchronous, Distributed Algorithms, and Why
1.4 Why Are Most Natural and Physical Processes Asynchronous?
1.5 The Importance of ADDM Algorithm-Controlled NIT Systems
1.6 Review of the Distributed Systems Literature
2 The Nature, Characteristics, and Implications of ADDM Algorithms
2.1 Nature of ADDM Algorithms
2.2 Fundamental Characteristics of ADDM Algorithms
2.2.1 Entities in ADDM Algorithms
2.2.2 Asynchronous Nature of the Entities
2.2.3 Concurrency in the Entities
2.2.4 Communication between the Entities
2.2.5 Proof of Correctness of ADDM Algorithms
2.2.6 Robustness of ADDM Algorithms
2.2.7 Performance of ADDM Algorithms
2.2.8 Stability of ADDM Algorithms
2.3 Existence and Synthesis of ADDM Algorithms for Real-World Problems
3 Case Studies: Synthesis of ADDM Algorithms for Real-World Systems
3.1 Algorithm for Accurate Execution of VHDL Descriptions
3.1.1 Introduction: Review of the Literature
3.1.2 The P2EDAS Approach
3.1.2.1 The Role of the Event Prediction Network in P2EDAS
3.1.2.2 Synthesis of the Event Prediction Network
3.1.2.3 The P2EDAS Algorithm
3.1.3 An Example to Illustrate P2EDAS
3.1.4 Implementation and Performance Analysis of P2EDAS
3.1.4.1 Performance Results for Shared Memory Multiprocessor Implementation
3.1.4.2 Performance Results for Loosely Coupled Parallel Processor Implementation
3.1.4.2.1 DLX Computer Architecture
3.1.4.2.2 Random State Generator
3.1.4.3 Performance Analysis
3.2 Decentralized Algorithm for Railway Networks with Soft Reservation
3.2.1 Introduction
3.2.2 The RYNSORD Approach
3.2.3 Modeling RYNSORD on an Accurate, Realistic, Parallel Processing Testbed
3.2.4 Implementation Issues
3.2.5 Simulation Data and Performance Analysis
3.3 A Novel Architecture for Real-Time Payment Processing
3.3.1 Introduction
3.3.2 The NOVAHID Approach
3.3.2.1 The NOVAHID Algorithm
3.3.2.1.1 Execution of International Transactions
3.3.2.2 Modeling NOVAHID on a Loosely Coupled Parallel Processor System
3.3.3 Modeling in NOVAHID
3.3.4 Implementation and Performance of NOVAHID
3.3.4.1 CPU Time
3.3.4.2 Total Time Required for International Transactions
3.3.5 Limitations of NOVAHID
3.4 Decentralized Decision-Making in Military Command and Control
3.4.1 Introduction: History and Structure of Military Command and Control
3.4.2 A Novel Approach to Decentralized Command and Control
3.4.2.1 The Asynchronous, Decentralized Algorithm
3.4.2.2 Modeling the Decentralized C3 Algorithm on an Accurate, Parallel Processing Testbed
3.4.3 Implementation Issues
3.4.4 Performance Analysis
3.4.4.1 Scenario 1
3.4.4.2 Scenario 2
3.4.4.3 Observations and Limitations
3.5 Distributed Circuit-Partitioning-Based Algorithm for Fault Simulation
3.5.1 Introduction
3.5.2 An Asynchronous, Distributed Algorithm for Fault Simulation
3.5.2.1 Distributed Fault Simulation of Combinational Subcircuits
3.5.2.2 Asynchronous, Distributed Fault Simulation of Sequential Subcircuits
3.5.2.2.1 Intuitive Overview
3.5.2.2.2 The NODIFS Algorithm for Sequential Subcircuits
3.5.3 Implementation Issues
3.5.4 Performance of NODIFS
3.6 Distributed Approach to Real-Time Payment Processing
3.6.1 Introduction
3.6.2 A Centralized, Uniprocessor-Based Algorithm for Real-Time Payment Processing
3.6.3 A Novel, Distributed Approach to Payment Processing
3.6.3.1 Execution of Events in the Pseudo-Transaction Algorithm
3.6.3.2 A Model of the Distributed Banking Network
3.6.3.3 A Distributed Architecture for the Simulation
3.6.3.3.1 Initialization from Network Configuration Files
3.6.3.3.1.1 Software Operating Characteristics of the Node
3.6.3.3.1.2 Simulation Characteristics of the Node
3.6.3.3.1.3 Characteristics of the Current Network
3.6.3.3.2 Connection Mechanism at Initialization
3.6.3.3.3 Assertion of Transaction Packets into the Simulation
3.6.3.3.4 The Notion of Time in the Simulation
3.6.4 Architecture of the Banking Simulation Program
3.6.4.1 The Node Program
3.6.4.1.1 Buffering and Processing of Input Packets
3.6.4.1.2 Switching Packets through the Node
3.6.4.1.3 Transmission and Buffering of Outgoing Packets
3.6.4.2 Limitations of the Proposed Approach
3.6.5 Implementation Issues
3.6.6 Performance of Banking Network Simulation
3.6.6.1 Experimental Banking Networks
3.6.6.2 Results
3.7 Hierarchical, Distributed, Dynamic Inventory Management
3.7.1 Introduction
3.7.2 Review of the Traditional Inventory Management Model
3.7.3 The HDDI Approach
3.7.4 Simulation Results and Performance Analysis of HDDI
3.7.4.1 Overview of HDDI’s Performance Behavior
3.7.4.2 Performance Evaluation of HDDI Relative to Its Parameters for the Inventory Management Network
3.7.4.2.1 The Parameter r_c, Ratio of Emergency to Regular Reorder Cost
3.7.4.2.2 The Parameter r_p, Ratio of Regular to Emergency Order Size
3.7.4.2.3 The Parameter h_b, Ratio of Holding to Backorder Cost
3.7.4.2.4 The Parameter r_l, Ratio of Normal to Emergency Lead Time
3.7.4.2.5 The Parameter d_p, Daily Profit
3.7.4.2.6 Performance Comparison of Four Scenarios
3.7.4.3 Performance Analysis for Different Inventory Management Networks
4 Debugging Network-Centric Systems
4.1 The Difficulty in Debugging NIT Systems
4.1.1 Principal Reasons Underlying Debugging Difficulties
4.1.2 A Representative Experience in Debugging NIT Systems
4.2 Visual Aids in Debugging: DIVIDE
4.2.1 Introduction
4.2.2 Visual Display of Parameters
4.2.2.1 Issues in User Interface Design
4.2.2.1.1 Movement Controls
4.2.3 Issues in the Design of the Graphical Display
4.2.3.1 Network Display
4.2.3.2 Buffer Graphs
4.2.3.3 Issues in Layout Design of the Graphical Images of the Concurrent Processes
4.2.3.4 Dynamic Scaling
4.2.3.5 Distributing the Visual Images of the Concurrent Processes
4.2.4 Implementation Details
4.2.5 Performance of DIVIDE
5 Proofs of Correctness of ADDM Algorithms
5.1 The Need for Proofs of Correctness of ADDM Algorithms
5.2 The Nature of the Proofs of Correctness of ADDM Algorithms
5.3 Proof of Correctness of the NODIFS Algorithm
5.3.1 Combinational Subcircuits
5.3.2 Sequential Subcircuits
5.3.2.1 Termination of Fault Simulation
5.4 Proof of Correctness of P2EDAS
5.4.1 Execution of Events in the Correct Order: Monotonicity in Wh Values
5.4.2 Proof of Freedom from Deadlock
5.4.3 Termination of Simulation
6 A Mathematical Framework for Synthesizing ADDM Algorithms
6.1 Review of the Current Literature
6.2 A Military Command, Control, and Communication Problem
6.2.1 Component Vectors of the Entities
6.2.1.1 Movement State Vector
6.2.1.2 Sensor State Vector
6.2.1.3 Target State Vector
6.2.2 State Vectors of the Entities
6.2.2.1 Self State Vector
6.2.2.2 Friend State Vector
6.2.2.3 Enemy State Vector
6.2.3 Complete Internal State Vector of an Entity
6.2.4 Decision Vectors of Entities
6.2.4.1 Movement Decision
6.2.4.2 Sensor Management
6.2.4.3 Target Management
6.3 Modeling the C3 Problem under the Centralized Paradigm in MFAD
8.4.3 Perturbations to the Input Rate and Stability Analysis
8.4.4 Perturbations to System Characteristics and Stability Analysis
8.4.4.1 Perturbations to Interstation and Train-to-Station Communication
8.4.4.2 Perturbations Relative to the Track Segments
8.4.5 Summary of Stability Analysis of RYNSORD
9 The Need and Art of Interdisciplinary Thinking
9.1 The Notion of Computational Intelligence
9.2 Reflection as a Catalyst in Triggering Creativity
9.2.1 Introduction
9.2.2 Reflection as a Catalyst in Triggering Creativity
9.2.2.1 Scientific Discoveries and Engineering Inventions
9.2.2.2 Creative Flashes in Ordinary Activities
9.2.2.3 Nature’s Use of Reflection
9.2.3 Using Reflection to Trigger Creativity at Will
10 The Future of ADDM Algorithms
References
Index
About the Author