

УДК 658.512.2

Hojjat Adeli

Large-scale computer-aided design

Abstract

The author and his associates have been working on creating novel design theories and computational models with two broad objectives: automation and optimization. This paper is a summary of the author's Keynote Lecture based on the research done by the author and his associates recently. Novel neurocomputing algorithms are presented for large-scale computer-aided design and optimization. This research demonstrates how a new level is achieved in design automation through the ingenious use and integration of a novel computational paradigm, mathematical optimization, and new high performance computer architecture.

Most of the neural network research has been done in the area of machine learning [1]. Neural network computing can also be used for design optimization. Adeli and Park [2] present a neural dynamics model for optimal design of structures by integrating the penalty function method, the Lyapunov stability theorem, the Kuhn-Tucker conditions, and the neural dynamics concept. A pseudo-objective function in the form of a Lyapunov energy functional is defined using the exterior penalty function method. The Lyapunov stability theorem guarantees that solutions of the corresponding dynamic system (trajectories) for arbitrarily given starting points approach an equilibrium point without increasing the value of the objective function. In other words, the neural dynamics model for design optimization problems guarantees global convergence and robustness. However, this does not guarantee that the equilibrium point is a local minimum. We therefore use the Kuhn-Tucker conditions to verify that the equilibrium point satisfies the necessary conditions for a local minimum.
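As a rough illustration of these ideas, the following Python sketch shows an exterior-penalty pseudo-objective and a single descent step of the associated dynamical system. It is a minimal sketch under stated assumptions, not the paper's formulation: the function names, the simple quadratic penalty term, and the explicit Euler step are choices made here for illustration only.

```python
import numpy as np

def pseudo_objective(x, weight_fn, constraint_fns, r_penalty):
    # Exterior penalty: only violated constraints (g_j(x) > 0) contribute,
    # so the penalty term vanishes inside the feasible region.
    violations = np.array([max(0.0, g(x)) for g in constraint_fns])
    return weight_fn(x) + r_penalty * np.sum(violations ** 2)

def dynamics_step(x, grad_pseudo_objective, dt=1.0e-3):
    # One explicit Euler step of the gradient system dx/dt = -grad V(x);
    # along such trajectories the energy functional V cannot increase,
    # which is the Lyapunov-type property exploited by the model.
    return x - dt * grad_pseudo_objective(x)
```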

The neural dynamics model was first applied to a linear optimization problem, the minimum weight plastic design of lowrise planar steel frames [3]. In this application, nonlinear code-specified constraints were not used. It was shown that the neural dynamics model yields stable results no matter how the starting point is selected.

Recently, Adeli and Park [4,5] extended the model for optimization of large structures consisting of several thousand members. In order to achieve automated optimum design of realistic structures subjected to actual constraints of commonly-used design codes such as the American Institute of Steel Construction (AISC) Allowable Stress Design (ASD) and Load and Resistance Factor Design (LRFD) specifications [6,7], they developed a hybrid counterpropagation neural (CPN) network-neural dynamics model for discrete optimization of structures consisting of commercially available sections such as the wide-flange (W) shapes used in steel structures.

There are four stages in the hybrid neural dynamics model (a schematic loop over these stages is sketched after the list):

- mapping the continuous design variables to commercially available discrete sections using a trained CPN network [8-11],

- generating the element stiffness matrices in the local coordinates, transforming them to the global coordinates, and solving the resulting simultaneous linear equations using the preconditioned conjugate gradient (PCG) method,

- evaluating the constraints based on the AISC ASD or LRFD specifications, and

- computing the improved design variables using the nonlinear neural dynamics model.
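The outer iteration implied by these four stages might be organized as in the Python sketch below. It is only schematic: all four stage functions are hypothetical placeholders for the components described above, and the convergence test is a simple illustration.

```python
def hybrid_design_loop(x0, cpn_lookup, analyze_with_pcg,
                       evaluate_constraints, neural_dynamics_update,
                       max_iter=100, tol=1.0e-4):
    """Schematic outer loop over the four stages of the hybrid model.
    Every callable passed in is a placeholder for one stage."""
    x = x0                                      # continuous design variables (areas)
    for _ in range(max_iter):
        sections = cpn_lookup(x)                # stage 1: snap to discrete W shapes
        response = analyze_with_pcg(sections)   # stage 2: stiffness matrices + PCG
        violations = evaluate_constraints(sections, response)    # stage 3
        x_new = neural_dynamics_update(x, violations)             # stage 4
        if max(abs(xn - xo) for xn, xo in zip(x_new, x)) < tol:   # converged?
            x = x_new
            break
        x = x_new
    return cpn_lookup(x)                        # final discrete design
```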

The design variables are the cross-sectional areas of the members. For integrated design of steel structures, a database of cross-section properties is needed for computation of element stiffness matrices and evaluation of the AISC ASD and LRFD constraints. A counterpropagation network consisting of competition and interpolation layers is used to learn the relationship between the cross-sectional area of a standard wide-flange (W) shape and other properties such as its radii of gyration [11]. In other words, the trained CPN network is used as an electronic cross-sectional properties database manager.

The recall process in the CPN network is performed in two steps. In the first step, for each design variable a competition is created among the nodes in the competition layer to select the winning node. The weights of the links between the variable and competition layers represent the set of cross-sectional areas of the available standard shapes. In the second step, the discrete cross-sectional properties encoded in the weights of the links between the competition and interpolation layers are recalled. The weights of the links connecting the winning node to the nodes in the interpolation layer are the cross-sectional properties corresponding to an improved design variable.
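In effect, this two-step recall reduces to a nearest-stored-value lookup keyed on cross-sectional area. The sketch below uses a tiny made-up table (the areas and radii of gyration are illustrative numbers, not values from the AISC manual) to show the mechanism.

```python
import numpy as np

# Illustrative mini-database only; the real AISC W-shape table is much larger.
AREAS = np.array([13.3, 19.7, 29.1, 38.3])     # competition-layer weights (areas)
PROPS = np.array([[6.1, 2.5],                  # interpolation-layer weights:
                  [8.6, 2.7],                  # [r_x, r_y] for each stored shape
                  [11.1, 3.1],
                  [13.4, 3.2]])

def cpn_recall(area):
    # Step 1: competition, the node whose stored area is closest to the
    # continuous design variable wins (winner-take-all).
    winner = int(np.argmin(np.abs(AREAS - area)))
    # Step 2: the winner's interpolation-layer weights are the recalled
    # discrete cross-sectional properties.
    return AREAS[winner], PROPS[winner]

discrete_area, (rx, ry) = cpn_recall(25.0)     # snaps to the 29.1 entry
```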

In the second stage of the neural dynamics model for optimal design of structures, sets of simultaneous linear equations need to be solved in order to find the nodal displacements. Direct methods require the assembly of the structure stiffness matrix, which can be very large for a structure with thousands of members. As such, they are not appropriate for distributed memory computers because of their large memory requirements. Consequently, iterative methods are deemed more appropriate for distributed memory computers, where the size of the memory is limited, for example, to 8 MB in the case of the CM-5 system used in this research. As a result, a data parallel PCG method is developed in this research.
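For reference, a serial Python sketch of a Jacobi-preconditioned conjugate gradient solver is given below. The diagonal preconditioner is an assumption made for brevity; the data-parallel version would additionally distribute the matrix-vector and dot products across the processing elements, and can form the product K p element by element without ever assembling the global stiffness matrix.

```python
import numpy as np

def pcg_solve(K, f, tol=1.0e-8, max_iter=1000):
    """Jacobi-preconditioned conjugate gradient for K u = f,
    with K symmetric positive definite (serial sketch only)."""
    M_inv = 1.0 / np.diag(K)          # diagonal (Jacobi) preconditioner
    u = np.zeros_like(f)
    r = f - K @ u                     # initial residual
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Kp = K @ p
        alpha = rz / (p @ Kp)
        u = u + alpha * p
        r = r - alpha * Kp
        if np.linalg.norm(r) < tol:   # converged
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u
```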

The third stage consists of constraint evaluation using the nodal displacements and member stresses obtained in the previous stage. Three types of constraints are considered: fabricational, displacement, and stress (including buckling) constraints. For the LRFD code, the primary stress constraint for a general beam-column member is a highly nonlinear and implicit function of the design variables.
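To give a sense of the form such a constraint takes, the sketch below evaluates the LRFD beam-column interaction equations (the H1-1a/H1-1b form); the design strengths passed in are themselves implicit, nonlinear functions of the selected section, which is where most of the complexity lies. The function and argument names are illustrative, not taken from the paper.

```python
def lrfd_interaction_ratio(Pu, phi_Pn, Mux, phib_Mnx, Muy, phib_Mny):
    # AISC LRFD beam-column interaction check (H1-1a / H1-1b form);
    # the stress constraint is satisfied when the ratio is <= 1.0.
    axial = Pu / phi_Pn
    bending = Mux / phib_Mnx + Muy / phib_Mny
    if axial >= 0.2:
        return axial + (8.0 / 9.0) * bending   # H1-1a
    return axial / 2.0 + bending               # H1-1b
```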

In the final stage, the nonlinear neural dynamics model acts as an optimizer to produce improved design variables from initial design variables. It consists of a variable layer and a number of constraint layers equal to the number of different loading conditions.
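One plausible form of this final-stage update, consistent with the exterior-penalty functional sketched earlier, is shown below; it is a hypothetical reconstruction, not the published formulation, with each constraint layer feeding back the penalized violations for one loading condition.

```python
import numpy as np

def variable_layer_update(x, grad_weight, constraint_jacobians,
                          violations_per_load_case, r_penalty, dt=1.0e-3):
    # Hypothetical stage-4 update: the constraint layers (one per loading
    # condition) feed back penalized violations, which are combined with the
    # weight gradient to move the variable layer downhill on the functional.
    feedback = np.zeros_like(x)
    for J, v in zip(constraint_jacobians, violations_per_load_case):
        feedback += J.T @ np.maximum(0.0, v)   # only violated constraints act
    return x - dt * (grad_weight + 2.0 * r_penalty * feedback)
```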

Optimization of large structures with thousands of members subjected to actual constraints of commonly-used codes requires an inordinate amount of computer processing time and can be done only on multiprocessor supercomputers [12,13]. A high degree of parallelism can be exploited in the neural computing model [1]. Consequently, Park and Adeli [14] created a data parallel neural dynamics model for discrete optimization of large steel structures and implemented the algorithm on a distributed memory multiprocessor, the massively parallel Connection Machine CM-5 system.

In the data parallel neural dynamics model, parallelism is exploited at all four stages of the neural dynamics model. The model has been implemented on a CM-5 system. The main components of the CM-5 system are a number of processing nodes (PN), partition managers (PM), and two high-speed, high-bandwidth communication networks called the data and control networks. A PN has four vector units (VU) with 8 MB of memory per unit and can perform high-speed vector arithmetic computations with a theoretical peak performance of 128 MFLOPS. A VU is the smallest computational element (processing element, PE) in the system; it executes vector operations on data in its own local (parallel) memory. In the CM-5 system used in this research a PM manages up to 2048 distributed PEs. To achieve high performance for the data parallel neural dynamics model, all the data are processed as parallel arrays across the VUs such that the elements that need to be operated on together reside in the local parallel memory of the same VU. Efficient load balancing is ensured by distributing the array elements evenly among the available VUs.
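This even spread of array elements amounts to the simple block distribution sketched below (on the CM-5 the data-parallel runtime performs the layout automatically; the function here only illustrates the arithmetic, and the element and unit counts are taken from the example structure and machine configuration mentioned in the text).

```python
import numpy as np

def block_distribute(n_elements, n_units):
    # Split element indices into nearly equal contiguous blocks, one per
    # vector unit, so per-unit workloads differ by at most one element.
    return np.array_split(np.arange(n_elements), n_units)

# e.g. 20096 members over 2048 processing elements -> blocks of 10 or 9 members
blocks = block_distribute(20096, 2048)
```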

The neural dynamics algorithm developed in this research has been applied to large highrise and superhighrise building structures of arbitrary size and configuration, including a 144-story steel superhighrise building structure with a height of 526.7 m (1728 ft). This structure is a modified tube-in-tube system consisting of a space moment-resisting frame with cross bracings on the exterior of the structure. The structure has 8463 joints, 20096 members, and an aspect ratio of 7.2. The structure is subjected to dead, live, and multiple wind loading conditions applied in three different directions according to the widely-used Uniform Building Code [15].

An attractive characteristic of the neural dynamics model is its robustness and stability. We have studied the convergence histories using various starting points and noted that the model is insensitive to the selection of the initial design. This is especially noteworthy because we applied the model to optimization of large space frame structures subjected to actual design constraints of the AISC ASD and LRFD codes. In particular, the constraints of the LRFD code are complicated, highly nonlinear, and implicit functions of the design variables. Further, the LRFD code requires the consideration of nonlinear second-order effects.

The largest structural optimization problem previously solved and reported in the literature is a 100-story highrise structure with 6136 members and variables (no design linking strategy was used) [16]. That structure was a space truss and was not subjected to any code-specified constraints. The largest example presented here is a space frame structure and is by far the largest structure optimized according to the AISC ASD and LRFD codes ever reported in the literature.

This research demonstrates how a new level is achieved in design automation through the ingenious use and integration of a novel computational paradigm, mathematical optimization, and new high performance computer architecture.

Acknowledgment

This research is sponsored by the National Science Foundation under Grant No. MSS-9222114, the American Iron and Steel Institute, and the American Institute of Steel Construction. Computing resources were provided by the Ohio Supercomputer Center and the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

References

1. Adeli, H. and Hung, S.L., Machine Learning: Neural Networks, Genetic Algorithms, and Fuzzy Systems, John Wiley and Sons, New York, 1995.

2. Adeli, H. and Park, H.S., "A Neural Dynamics Model for Structural Optimization - Theory", Computers and Structures, 1995, in press.

3. Park, H.S. and Adeli, H., "A Neural Dynamics Model for Structural Optimization - Application to Plastic Design of Structures", Computers and Structures, 1995, in press.

4. Adeli, H. and Park, H.S., "Optimization of Space Structures by Neural Dynamics", Neural Networks, Vol. 8, No. 6, 1995.

5. Adeli, H. and Park, H.S., "Hybrid CPN-Neural Dynamics Model for Discrete Optimization of Steel Structures", 1996, to be published.

6. AISC, Manual of Steel Construction, Allowable Stress Design, American Institute of Steel Construction, Chicago, 1989.

7. AISC, Manual of Steel Construction, Load and Resistance Factor Design, Vol. I, Structural Members, Specifications, and Codes, American Institute of Steel Construction, Chicago, 1994.

8. Hecht-Nielsen, R., "Counterpropagation Networks", Proceedings of the IEEE 1st International Conference on Neural Networks, Vol. II, IEEE Press, New York, 1987, pp. 19-32.

9. Hecht-Nielsen, R., "Counterpropagation Networks", Applied Optics, Vol. 26, No. 23, 1987, pp. 4979-4985.

10. Hecht-Nielsen, R., "Applications of Counterpropagation Networks", Neural Networks, Vol. 1, No. 2, 1988, pp. 131-139.

11. Adeli, H. and Park, H.S., "Counterpropagation Neural Networks in Structural Engineering", Journal of Structural Engineering, ASCE, Vol. 121, No. 8, 1995, pp. 1205-1212.

12. Adeli, H., Ed., Parallel Processing in Computational Mechanics, Marcel Dekker, New York, 1992.

13. Adeli, H., Ed., Supercomputing in Engineering Analysis, Marcel Dekker, New York, 1992.

14. Park, H.S. and Adeli, H., "Data Parallel Neural Dynamics Model for Integrated Design of Steel Structures", 1996, to be published.

15. Uniform Building Code, Vol. 2 - Structural Engineering Design Provisions, International Conference of Building Officials, Whittier, CA, 1994.

16. Soegiarso, R. and Adeli, H., "Impact of Vectorization on Large-Scale Structural Optimization", Structural Optimization, Vol. 7, 1994, pp. 117-125.
