Friday, April 2, 2010
Control and Path Prediction of an Automated Guided Vehicle
PREPARED BY: VIJAY PATEL 09CAD12
1. Introduction.
Automated Guided Vehicles (AGVs) have been applied in flexible manufacturing systems. Many factories have adopted them in assembly or production lines, for example in automobile, food processing and woodworking plants. Many researchers have developed and designed AGVs to suit applications related to the main problems of their factories. An AGV was first developed and researched by [17, 18, 19] in an attempt to use it at a jumbo truck manufacturer in Thailand. Surveying past AGV development, several papers concern the design and control aspects, as follows. Different structures were proposed in several cases:
[1] proposed an AGV architecture with two wheels driven through a differential gear drive and parallel-linkage steering; the design and operation were also presented by [2], who determined the track layout and the number of AGVs for transportation control in job-shop and flow-shop environments using queuing network theory. For the entire FMS, [3] proposed an operation control method using a two-AGV system: they solved the AGV scheduling problem with a model based on Petri nets, and used formulation and heuristic global search to seek the optimal operation of the entire FMS. The guide-path selection problem for AGVs in an FMS was addressed by [4], who proposed an approach for material-flow modelling based on a mathematical optimization method and with it obtained a guide-path layout design for wire-guided vehicles. The objective of the optimization model is to minimize the total distance travelled by the vehicles in the material handling system. Route planning of AGVs in an FMS was addressed by [5], who presented a new approach for the dynamic route planning and scheduling of AGVs, applying a search algorithm and heuristic rules to solve route assignment in dynamic situations. [6] proposed a path-planning strategy for AGV navigation, collision avoidance and docking to the target; the path planning was implemented on an onboard computer to avoid a wire-guided path. The AGV should not only move along the path with collision avoidance but also navigate without deadlock, as done by [7]. The control approach is an important part of commanding AGV actions: [8] formulated a control algorithm with a digraph method for real-time path assignment to the vehicles, and deadlock was handled with a colored resource-oriented Petri net model to keep real-time control conflict-free.
[9] applied variable structure system techniques: the AGV was modelled using kinematics and dynamics, and sliding mode control with a Lyapunov design was applied to eliminate chattering, though the work was implemented only in simulation. Another paper proposed AGV control using fuzzy logic, as shown in [10]: the AGV was guided by a photoelectric guideway, and the designed controller self-adjusted its control parameters through a fuzzy controller.
[11] proposed steering control of an AGV using fuzzy control. The AGV was guided by guide tape, and the authors showed the response and energy saving for a step change of the guide tape; the fuzzy controller achieved a greater reduction in steering energy than a PI controller.
[12] presented a tracking algorithm for AGV navigation in a container terminal. A multiple-model algorithm based on multi-sensor detection was used to detect obstacles and other AGVs, and an unscented Kalman filter was used for AGV localization; the proposed algorithm was verified by simulation. Adaptive control of an AGV was also proposed by [13]: a nonlinear dynamic model was developed for motion generation, and the proposed control was based on the Lyapunov concept to ensure control of the AGV even when the dynamic parameters are imperfect. Intelligent AGVs have also been pursued with several methods, integrating sensors and vision for AGV control.
[14] studied intelligent path following and control for a vision-based automated guided vehicle: they presented path-following control of the AGV with a vision control system, with multiple sensors also applied in real-time steering control. The Hough transform algorithm was applied to detect the path guideline, as shown by [15]. The path guideline was recognized by optical sensors, as proposed by [16], using an array of 14 infrared (IR) emitter-detector pairs arranged in two columns; trajectory recognition was based on neural networks.
Figure-0 Mini-AGV, 60 x 15 cm.
2. System architecture.
2.1. AGV design.
It is a three-wheeled vehicle, as shown in Fig. 1. The front wheel is used for driving and steering the AGV, and the two rear wheels are free. Steering and driving each use a DC motor. Two encoders are individually attached to the two rear wheels to measure the vehicle displacement and then calculate its real-time position and orientation. Positioning the encoders on the free wheels gives the vehicle an accurate measurement of its progression. A programmable logic controller (PLC) is used for motion control.
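The position and orientation update from the two rear-wheel encoders is standard differential-drive dead reckoning, which can be sketched as below. This is a minimal illustration, not the PLC code on the vehicle; the function and parameter names (including track_width) are assumptions for the sketch.

```python
import math

def update_pose(x, y, theta, d_left, d_right, track_width):
    """Dead-reckoning update from the two rear-wheel encoder increments.

    d_left, d_right: wheel travel since the last sample (same length unit).
    track_width: distance between the two rear wheels (assumed name).
    Returns the new (x, y, theta).
    """
    d_center = (d_left + d_right) / 2.0          # travel of the axle midpoint
    d_theta = (d_right - d_left) / track_width   # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Straight-line check: equal wheel travel leaves the heading unchanged.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = update_pose(*pose, d_left=0.01, d_right=0.01, track_width=0.15)
```

With equal increments on both wheels the heading term stays zero and the vehicle advances along x, which is the behaviour one expects from the encoder placement described above.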
2.2. Control structure.
The AGV (automated guided vehicle) in this work uses three control structures, defined here:
1) PLC control structure.
2) Camera control structure.
3) Sensor control structure.
2.2.1 PLC control structure.
The parameters of the motion are the driving speed and steering angle, which determine the evolution of the position and orientation of the AGV. The input and output signals are interfaced with the PLC module. The inputs are the encoder signals from the left and right rear wheels. The driving speed and steering angle are calculated from these inputs, and the digital output is converted to an analog signal to drive the amplifiers of the driving motor and steering motor on the front wheel, as shown in Fig. 2.
A). INTEGRATED PLC CONTROL WITH WIRELESS CAMERA OF AGV SYSTEM.
i) SYSTEM ARCHITECTURE DESCRIPTION.
Figure-3
The AGV prototype design is based on an existing JUMBO industrial truck, as shown in Figure 3. The mechanical layout, encoder placement and PLC motion control are as described in Section 2.1 and Figure 1: the driving speed and steering angle are computed from the rear-wheel encoder signals, and the digital outputs are converted to analog signals driving the amplifiers of the front-wheel driving and steering motors, as shown in Figure 2. Figure 4 shows the communication system. The Modbus protocol is selected for the communication structure between the PC at the operator site and the PLC on the remote AGV. Master-slave parameters configure the communication protocol on the PC through a standard RS-232 port: the master sends a command query to the slave, the slave answers back to the master, and communication takes place.
Figure-4 AGV Wireless Communication.
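As a small illustration of what a master query looks like on the wire, the sketch below builds a Modbus RTU frame with its standard CRC-16 checksum. The particular slave address and register are arbitrary examples, not the AGV's actual register map.

```python
def modbus_crc16(frame: bytes) -> bytes:
    """CRC-16/Modbus (poly 0xA001, init 0xFFFF), appended low byte first."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc.to_bytes(2, "little")

# Master query: read one holding register (function 0x03) from slave 1.
query = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x01])
query += modbus_crc16(query)  # -> b'\x01\x03\x00\x00\x00\x01\x84\x0a'
```

The slave echoes the address and function code in its response with the requested register data, which is the query/answer exchange described above.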
ii) CONTROL DESIGN SYSTEM.
Once the deviation error is evaluated, the steering and driving command signals can be calculated and converted to analog signals by the PLC. The steering and driving control strategies are shown in the simple block diagram of Figure 5. The correction applied to the command signal is proportional for the driving signal and proportional-derivative for the steering signal.
The control algorithm of the AGV has been implemented on a Schneider TSX Micro PLC. The program is written in PL7 Pro using Grafcet and structured text. The main inputs of the PLC are high-speed up/down counters connected to the two encoders. The steering and driving command outputs are converted to analog outputs in the 0-5 V range. The Grafcet loop executes three consecutive tasks, and the control loop is executed every 5 ms.
iii) INTEGRATED VISION SYSTEM.
Two modes of operation are developed for the AGV control function: automatic and manual. In this section, we describe manual control with a wireless camera mounted at a fixed point on the front of the AGV in a look-ahead visual control structure, as illustrated in Figure 6 a). An AV-receiver module sends and receives the audio/video signal, transmitted at radio frequency with a maximum range of 100 meters, as illustrated in Figure 6 b). A human operator can view the environment and control the AGV during its movement in manual mode.
iv) SIMULATION AND EXPERIMENTS.
In the control design, DC motors are used for the driving system. The AGV control system has two axes: one for the driving axis and the other for the steering axis. Positioning control of the AGV is needed to control displacement and steering angle according to the generated path command. In this work, we design the control structure in Matlab using a PD controller, as depicted in Figure 7; the control gains are Kp = 27.5 and Kd = 5.5, giving a response time of 1.8 seconds with no steady-state error, as shown by the control performance in Figure 8.
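The discrete PD loop with these gains can be sketched in a few lines. The plant here is an assumed unit-inertia axis (acceleration equals the command), not the identified vehicle dynamics, and the 5 ms sample time follows the PLC control loop; under these assumptions the step response settles on roughly the reported time scale.

```python
# Discrete PD loop with Kp = 27.5, Kd = 5.5 and the 5 ms PLC sample
# time; the plant is an assumed unit-inertia axis, a stand-in for the
# real driving/steering dynamics.
Kp, Kd, dt = 27.5, 5.5, 0.005
x, v = 0.0, 0.0            # axis position and velocity
target = 1.0
e_prev = target - x
for _ in range(600):       # 3 s of simulated motion
    e = target - x
    u = Kp * e + Kd * (e - e_prev) / dt   # PD command
    e_prev = e
    v += u * dt            # unit-inertia plant: acceleration = command
    x += v * dt
```

With these gains the equivalent second-order loop has a damping ratio of about 0.5, so the position reaches the target with a small overshoot and no steady-state error, consistent with Figure 8.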
v) Experiment Results.
The command window illustrated in Figure 9 is used to control AGV movement remotely through a PC. There are two types of command: positioning commands (x, y position) and jog-mode commands, which test motion control in each axis (go, turn left, turn right, etc.). In a positioning command, a set of x, y coordinates is sent to the AGV along with the designed path of movement. Experiments were conducted in several tests; for example, Figure 10 shows a designed S-curve path for the AGV on the PC. The position pairs are sent to the AGV over the wireless communication channel, and the AGV receives the commands for the two axes, steering and driving, to follow the specified path. The result of the AGV moving along the specified path is shown in Figure 11.
2.2.2. Camera Control structure.
In this article we discuss a machine vision system that uses one on-board camera to guide the AGV to pick up a pallet, and explain some of the challenges that need to be taken into account in a project like this.
Figure 12. Camera for AGV.
Figure 13.
Closed loop control.
To guide the AGV to its target, a closed-loop control system is needed, as seen in Figure 13. The control system consists of a sensor, a controller, and an actuator that changes the position of the machine. In this case the sensor contains a machine vision application that detects the position of the pallet and returns it in real-world units to the controller. The controller calculates the steering commands for the actuator, which mechanically moves the AGV closer to the pallet and finally picks it up. This article concentrates on the sensor and the technical challenges related to its implementation.
The sensor.
The sensor's task is to detect the pallet in the environment without any artificial cues. The sensor consists of a camera looking forward from the AGV and a computer that processes the images from the camera. This processing consists of a pattern-recognition algorithm. There are at least two ways to do pattern recognition: gray-value-based correlation and edge-based matching. Edge-based matching is used in this application since it is more robust against illumination variations.
Edge based matching.
To use edge-based matching, the shape to be matched has to be well known. In this case the shape is one side of the pallet, as seen in Figure 14. Furthermore, the assumed size of the pallet in the image needs to be known, or the search will consume too much time and the AGV cannot be controlled in real time. This approach assumes the visible shape has sharp edges, which is typically the case with pallets.
Figure 14. Shape of a pallet found with edge-based matching.
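A toy version of edge-based matching can be sketched as correlating gradient-magnitude edge maps of the template and the scene. This is an illustration only, assuming a Sobel edge stage and a brute-force search; all function names are ours, and production systems use far faster and more robust matchers.

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map (Sobel), a minimal stand-in for the
    edge-extraction stage of the matcher."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            patch = img[i-1:i+2, j-1:j+2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * kx.T)
    return np.hypot(gx, gy)

def match_edges(scene, template):
    """Slide the template's edge map over the scene's edge map and
    return the offset with the highest edge correlation."""
    se, te = sobel_edges(scene), sobel_edges(template)
    th, tw = te.shape
    best, best_pos = -1.0, (0, 0)
    for i in range(se.shape[0] - th + 1):
        for j in range(se.shape[1] - tw + 1):
            score = np.sum(se[i:i+th, j:j+tw] * te)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

# Toy scene: a bright 7x7 "pallet" whose top-left corner is at (5, 8);
# the template holds the same shape with its corner at (1, 1), so a
# correct match reports an offset near (4, 7).
scene = np.zeros((20, 20)); scene[5:12, 8:15] = 1.0
template = np.zeros((9, 9)); template[1:8, 1:8] = 1.0
row, col = match_edges(scene, template)
```

Because only edge pixels contribute to the score, uniform illumination changes inside the pallet region barely affect the match, which is the robustness argument made above.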
Edge based matching problems.
It is hard to find a shape in an image that has many false edges, for example shadow edges created by sunlight. In addition, some pallets have rounded edges, which cannot easily be found in the image. False edges are a major problem in this kind of application, since the pallets are in a naturally illuminated environment or the environment itself contains many other edges. One way to fight this problem is to use additional information to help the pattern-recognition algorithm; in this application, depth information is used.
Shape separation with motion vectors.
Motion-vector-based shape separation is based on an optical flow algorithm, which calculates motion vectors from consecutive video frames. The motion vectors illustrate the movement in the image, i.e. where each part of the image has moved between consecutive frames (see Figure 14). This method can be used to separate objects at different distances from the camera. A requirement is that the objects stay still while the camera moves.
Figure 14.
To calculate the motion vectors precisely, all surfaces should have texture. In addition, reflections that appear to move with the camera should be minimized. The calculation itself is resource-intensive, so a fast computing platform is needed to use this method in real time.
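The idea of motion vectors from consecutive frames can be illustrated with crude block matching: for each block in the previous frame, search a small window in the current frame for the best-matching displacement. This is a sketch, not the production optical-flow algorithm (e.g. Lucas-Kanade); the function name and parameters are ours.

```python
import numpy as np

def block_motion(prev, curr, block=8, search=3):
    """Block-matching motion vectors: for each block of `prev`, return
    the (dy, dx) displacement in `curr` with the smallest absolute
    difference, searched over a +/- `search` pixel window."""
    h, w = prev.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by+block, bx:bx+block]
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cost = np.abs(curr[y:y+block, x:x+block] - ref).sum()
                        if best is None or cost < best:
                            best, best_v = cost, (dy, dx)
            vectors[(by, bx)] = best_v
    return vectors

# Textured frame panned 2 px to the right, as if the camera moved.
rng = np.random.default_rng(0)
prev = rng.random((32, 32))
curr = np.roll(prev, 2, axis=1)
vecs = block_motion(prev, curr)
```

Blocks at different distances from a moving camera would report different vector lengths, which is exactly the cue used to separate the pallet from its background.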
Image processing.
i) Vision for Automated Guided Vehicles.
Automated guided vehicles (AGVs) are used to perform routine tasks for industry, as well as in areas hazardous to humans. Machine vision can provide such vehicles with 'sight', allowing them to understand their surroundings and leading to more flexible use of AGVs. Stereo vision gives AGVs a three-dimensional (3D) understanding of their environment (Fig 14.1). This can be used to perform free-space mapping (FSM), which allows an AGV to find clear paths between obstacles.
ii) Detecting Obstructions
Figure 15. Example of how to apply image processing to detect obstructions
First, the ground plane (GP) is identified by fitting a plane through features lying in it, e.g. floor markings. Edge detection identifies features in the AGV's field of vision (Fig 14.2). Edges that are not in the GP are assumed to belong to objects that extend to the GP and would thus obstruct the AGV's movements (Fig 14.3). This approach enables 3D objects to be distinguished from features like floor markings. Three-dimensional scene edges, derived using stereo vision, can also be used for vehicle navigation and for the location and tracking of known objects (Fig 14.4).
iii) Exploring New Surroundings.
Figure 15. Super-imposing a Cartesian grid in Automatic Guided Vehicle .
When an autonomous guided vehicle (AGV) enters an unknown area, it must be able to understand its surroundings as it proceeds. One approach to achieving this involves analysing the images received from a camera placed at the front of the AGV. First, features such as corners (T-junctions or Y-junctions) are located as the AGV moves through the scene. Such features are chosen because the point of intersection of the lines forming a corner is fixed regardless of the viewing angle. The identified features are then tracked in the series of images. As the AGV moves, the apparent motion of features in its field of view depends on their distance; in fact, from the trajectories of features in the image, their 3D positions and the 3D motion of the AGV can be estimated. By superimposing a Cartesian grid on the image (see Fig 15), a drivable region can be defined.
iv) AGV Surveillance.
Figure16. View of Monitor in AGV Surveillance
AGVs have mechanisms to locate their position relative to their environment. Correct calculation of the AGV's position is essential for safe and effective performance of tasks. It is therefore necessary to monitor the movements of an AGV and correlate this information with the AGV's own estimate of its position. One such system uses four fixed cameras to survey a workspace. The image from each camera is used to identify objects by subtracting the received image from a reference image of the empty workspace. The positions of the cameras are calibrated so that the positions of objects on the floor can be determined from their positions in the image. The data from the four cameras are fused to achieve precise location. Objects such as people can be distinguished from an AGV by using models which describe characteristic features.
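The reference-image subtraction used by each fixed camera can be sketched in a few lines; this is a minimal illustration with an assumed threshold, while a real system would add noise filtering and connected-component labelling.

```python
import numpy as np

def detect_objects(frame, reference, threshold=0.2):
    """Mark pixels that differ from the empty-workspace reference image;
    connected regions of such pixels correspond to objects on the floor."""
    return np.abs(frame.astype(float) - reference.astype(float)) > threshold

# Empty floor plus one object in the live frame:
reference = np.zeros((10, 10))
frame = reference.copy()
frame[3:6, 4:7] = 1.0           # an object occupying a 3x3 pixel area
mask = detect_objects(frame, reference)
```

The centroid of each masked region, mapped through the camera calibration, gives the object's floor position; fusing the four cameras' estimates then refines it, as described above.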
2.2.3 Sensor control structure.
Figure17
A key requirement for autonomous navigation in an unconstrained and uncertain environment is that the system be capable of sensing the surroundings to determine where the AGV is at present (localization) and where it is moving, so that the AGV can respond intelligently to a changing situation or environment. As with many autonomous navigation systems, all the sensors are placed on board the vehicle. Sensory modules on board include optical encoders, rate sensors, accelerometers, a gyroscope, a compass, DGPS, a CCD color camera, a laser scanner, sonar and proximity sensors. The sensors provide complementary information about the internal state of the vehicle and the current state of the environment; therefore, a proper sensor fusion algorithm can be developed to obtain localization and obstacle information. The vision system is used for localization and local navigation. For outdoor navigation, two novel lane-detection algorithms have been developed. In one approach, the lane edges are detected directly in the camera image and then converted into real-world coordinates (see Figure 18); a Catmull-Rom spline-based lane model and a free-form snake-based (FFSB) algorithm [7], which describes the perspective effect of parallel lines, have been developed. In the other approach, it is assumed that the lanes lie on flat ground: the camera image is first projected onto the ground plane, and the edges are then detected in the ground image. This algorithm uses deformable templates, with a circular arc describing the lane shape over a limited range. Differential Global Positioning Systems (DGPS) are gaining widespread popularity as a navigational aid, especially for providing absolute position information. We developed a position estimation system by fusing information from different sensors, viz. DGPS, a rate gyroscope and odometers, via an extended Kalman filtering technique.
Figure 18:
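The DGPS/gyro/odometer fusion can be sketched as one predict/update cycle of a pose EKF. This is a simplified illustration: the state is only (x, y, heading), the covariances Q and R are assumed values rather than identified ones, and the real system's filter is more elaborate.

```python
import numpy as np

def ekf_step(state, P, d, dtheta, gps, Q, R):
    """One predict/update cycle of a simplified pose EKF.

    state = [x, y, theta]; the prediction uses odometer travel d and
    gyro heading increment dtheta, and the update uses a DGPS (x, y)
    fix.  Q, R are assumed process/measurement covariances.
    """
    x, y, th = state
    # --- predict (unicycle motion model) ---
    pred = np.array([x + d * np.cos(th), y + d * np.sin(th), th + dtheta])
    F = np.array([[1, 0, -d * np.sin(th)],   # Jacobian of the motion model
                  [0, 1,  d * np.cos(th)],
                  [0, 0, 1]])
    P = F @ P @ F.T + Q
    # --- update with the DGPS position fix ---
    H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    state = pred + K @ (gps - H @ pred)
    P = (np.eye(3) - K @ H) @ P
    return state, P

# Straight drive with consistent odometry and DGPS fixes.
state, P = np.array([0.0, 0.0, 0.0]), np.eye(3)
Q, R = np.eye(3) * 0.01, np.eye(2) * 0.01
for k in range(1, 21):
    gps = np.array([0.1 * k, 0.0])
    state, P = ekf_step(state, P, d=0.1, dtheta=0.0, gps=gps, Q=Q, R=R)
```

The odometer/gyro prediction carries the pose between fixes, and each DGPS update pulls the estimate back toward the absolute position, which is the complementary behaviour the fusion is designed to exploit.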
Navigation Module.
The navigation module is implemented using a behavioural approach: each complex local navigational task is analyzed in terms of primitive behaviors and expressed as an aggregation of such behaviors. A fuzzy logic approach to behavior synthesis and integration has been adopted; the fuzzy behavioral methodology provides a natural means of incorporating human navigation skills as linguistic information. The fuzzy behaviors considered necessary include Wall-Curb Following, Obstacle Avoidance, Obstacle Contouring, Narrow Path Maneuver, Cornering, Route Following and Wandering. Each behavior is synthesized from appropriately fused sensory data received from the complementary sensor devices. The traveling profile of the AGV performing left curb following and a right turn at a junction is shown in Figure 19.
Figure-19
A novel behavior fusion method for the navigation of AGVs in unknown environments was also investigated. The proposed navigator consists of an Obstacle Avoider (OA), a Goal Seeker (GS) and a Navigation Supervisor (NS). The fuzzy actions inferred by the OA and the GS are weighted by the NS using local and global environmental information and fused through fuzzy set operations to produce a command action, from which the final crisp action is determined by defuzzification. Simulation shows that the navigator performs successful navigation tasks in various unknown environments, with smooth action and exceptionally good robustness to sensor noise.
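The OA/GS weighting performed by the Navigation Supervisor can be illustrated with a crisp stand-in: blending the two steering commands by a weight derived from obstacle proximity. This sketch replaces the fuzzy inference and defuzzification described above with a simple linear blend, and all names are ours.

```python
def fuse_behaviors(steer_oa, steer_gs, obstacle_proximity):
    """Blend the Obstacle Avoider (OA) and Goal Seeker (GS) steering
    commands.  The weight is a simple function of normalized obstacle
    proximity (0 = clear, 1 = imminent), standing in for the Navigation
    Supervisor's fuzzy weighting and the final defuzzification."""
    w = min(max(obstacle_proximity, 0.0), 1.0)    # NS weighting
    return w * steer_oa + (1.0 - w) * steer_gs    # fused crisp action

# With an obstacle imminent the OA command dominates; on a clear path
# the GS command does, and intermediate proximities blend the two.
a = fuse_behaviors(30.0, -10.0, 1.0)
b = fuse_behaviors(30.0, -10.0, 0.0)
c = fuse_behaviors(30.0, -10.0, 0.5)
```

The smooth interpolation between behaviors is what gives the navigator its smooth action, in contrast to hard switching between an avoidance mode and a goal-seeking mode.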
CONCLUSION.
1) PLC control structure.
This article presents control of an AGV with an integrated wireless camera. The overall structure for designing the AGV is described. AGV motion control is implemented using a PD control scheme, with the driving and steering axes separated. In this research, we equipped a wireless camera for controlling the AGV in a remote area: in manual mode, position and orientation movements are estimated and controlled by the human eye, while the motion control system measures the AGV's position and orientation. Simulation with Matlab/Simulink is used to verify the designed control parameters of the AGV. The experiment was tested, for example, with a specified S-curve path of AGV movement. We conclude that the vehicle can travel from the starting position to the target position with accurate location, and that the AGV can be controlled while rapidly scanning the environment with the wireless camera, giving freedom of operation. Future work is planned to increase autonomy by extracting observations with an image processing unit; treatment of the dynamic model of the vehicle is also planned as the next step.
2). Camera control structure.
To make this kind of application more robust, several image modalities can be used; in this case, a gray-scale image and a depth image were used. These modalities assist each other, since pattern recognition fails in different situations with the gray-scale image than with the depth image. Pattern-recognition algorithms are not a silver bullet for all machine vision cases: deep image-processing knowledge is needed to know when and how these algorithms can be used and with what boundary conditions. When the application is not trivial, as the one discussed here, it is better to turn to a machine vision expert.
Implementing a camera-based Local Positioning System® (LPS) has established its potential as an essential remote supervisor of multiple semi-autonomous AGVs, scaled down within the concept of a large indoor MiniWorld laboratory. With minimal changes in the AgileFrames concepts of individual and fleet control, extending the supervisory and localisation task to the real world would assume our LPS® is substituted by the general free-field modalities offered by GPS, the satellite-based Global Positioning System. In the real world, notably the public domain with a mix of non-system-related actors, the semi-autonomous vehicles could, in addition to GPS, incorporate obstacle detection and ranging.
3) Sensor control structure.
Many factors need to be taken into account in the development of an autonomous navigation system for outdoor AGVs. Among them, the design of the vehicle structure, the computer architecture, the actuation system and its control, and the utilization of various sensors play an important role. Further, it is important that the autonomous navigation system be intelligent enough to act both reactively and proactively to changing environmental conditions. Realizing such an architecture involves developing subsystems for sensing, path planning, localization, local navigation and path control. A judicious choice of good sensors for determining the internal state of the AGV and the state of the environment is of paramount importance: especially outdoors, due to factors such as weather, changing environmental conditions and road conditions, a few sensors acting independently may not be adequate. Thus, an array of sensors needs to be used with appropriate sensor-fusion technology. Advanced non-linear control methods are a necessity in controller development, as AGV kinematic and dynamic models are extremely non-linear and complex. A local navigation system based on a behaviorist decomposition of tasks is highly suited, given the complexity of local navigation in the presence of environmental uncertainty and modeling difficulties.
The AGV's speed and position can also be controlled through other modules; one example is given below.
VEHICLE AND CONTROLLER MODULE.
Automatic control is an important function of the AGV. The main function of the control system is to track a desired path specified by the local navigation system as accurately as possible. The control problem is compounded by the fact that the model dynamics are highly non-linear and of high order. Further, model parameters are either unknown or uncertain, and also subject to variation due to changing load and environmental conditions. Detailed and accurate kinematic and dynamic models of the non-holonomic AGV were developed for designing and testing suitable controllers and navigational strategies. Despite the model complexity of the AGV, linear PD/PID control methods have been applied, yielding adequate performance at slow speeds. To enhance performance, several non-linear methods are being investigated, including input-output feedback linearization and sliding mode control. Considering the complexity and high non-linearity of the vehicle dynamics, the difficulty of obtaining actual dynamic parameter values, the variability of certain model parameters, and the human knowledge available on speed and steering control, fuzzy control methods are also being investigated. The experimental results of such a fuzzy control scheme, which consists of a Lateral Fuzzy Controller (LAFC) and a Fuzzy Drive Controller (FDC), are shown in Figure 20.
Figure 20:
REFERENCES.
1).PLC control structure.
1. Butdee, S., and Suebsomran, A. (2007). Localization Based on Matching Location of AGV, Proceeding of the 24th
International Manufacturing Conference, IMC24, Waterford Institute of Technology, Ireland, pp. 1121-1128.
2. Butdee, S., and Suebsomran, A. (2006). Learning and Recognition Algorithm of Intelligent AGV System,
GCMM2006, Santos, Brazil, pp.13-72.
3. Butdee, S., Vignat F., and Suebsomran, A. (2006). Self - alignment Control of an Automated Unguided Vehicle,
IDMME06, Grenoble, France.
4. Tomita, K., and Tsugawa, S. (1994). Visual Navigation of a Vehicle Along Roads: The Algorithm and
Experiments, IEEE 1994 Vehicle & Navigation Information Systems Conference Proceedings, pp. 419-424.
5. Seelinger, M, and Yoder, J-D. (2005). Automatic Pallet Engagement by a Vision Guided Forklift, Proceedings of
the 2005 IEEE International Conference on Robotics and Automation , Barcelona, Spain, pp. 4068-4073.
6. Kay G. M., and Lud R. C., (1993). Global Vision for the Control of Free-Ranging AGV Systems, Proceedings of
the 1993 IEEE International Conference on Robotics and Automation, Atlanta, Georgia, Volume 2 , pp. 14-19.
7. Fukuda T., Yokoyama Y., Abe Y., and Tanaka K. (1996). Navigation System Based on Ceiling Landmark
Recognition for Autonomous Mobile Robot- Position / Orientation Control by Landmark Recognition with Plus
and Minus Primitives-, Proceedings of the 1996 IEEE International Conference on Robotics and Automation ,
Minneapolis, Minnesota, pp. 1720-1725.
8. Hayakawa Y., White R., Kimura T., and Naito G. (2004). Driver-Compatible Steering System for Wide Speed-
Range Path Following, IEEE/ASME Transactions on Mechatronics, 9,: 544-552.
2). Camera control structure.
1. Evers, J.J.M., L. Loeve, D.G. Lindeijer (2000). The service-oriented agile logistic control
and engineering system: SERVICES. Logistic Information management, Vol.13, No.2.
2. Furnée, E.H. (1967). Hybrid instrumentation in prosthetics research. In Proc. 7th Int.Conf.
on Medical and Biological Engineering, Stockholm 1967, p.446.
3. Furnée, E.H., A. Jobbágy, J.C. Sabel, H.L.J. van Veenendaal, F. Martin, D.G.W.C. Andriessen (1997). Marker-referred movement measurement with grey-scale coordinate extraction for high-resolution real-time 3-D at 100 Hz. In (J. Walton, ed.) Proc. of SPIE, Volume 3173, pp. 357-369.
4. Lindeijer, D.G. (2000). The new logistics is Integrated, Communicative and Real-time. In (P.H.L. Bovy, ed.) Proc. TRAIL 6th Annual Congress "Transport, Infrastructure and Logistics", The Hague, Dec. 12, 2000, pp. 87-108.
5. Van Baarlen, J.F. (2001). The motion capture system PRIMASNT: Hardware, software and applications. MSc Thesis, Faculty of Applied Physics (section Signals, Systems and Control), Delft University of Technology, April 2001, 124 p.
3)Sensor control structure.
1. C. C. Chan, “An Overview of Electric Vehicle
Technology”, Procs of the IEEE, Vol.81, No.9, Sept. 1993,
pp 1202-1213.
2. Y. K. Tham, Sensor Fusion for Four-Wheel Steerable
Industrial Vehicles, MEng Thesis, School of EEE, NTU,
1999.
3. C. T. Goh, Position Estimation of Autonomous Guided
Vehicle, MEng Thesis, School of EEE, NTU, 1999.
4. Y. Wang, A Novel Vision-Based Lane Detection and
Tracking Algorithm using B-Snake, MEng Thesis, School
of EEE, NTU, 1999.
5. Y. Wang, D. G. Shen and E. K. Teoh, “Lane Detection
using Spline Model”, to appear in Pattern Recognition
Letter, 2000, Netherlands.
6. L. Y. Chen, Z. K. Lu and E. K. Teoh, “A novel Affine
Invariant Feature set and its Applications in Motion
Estimation”, Procs ICIP-2000, Sept. 10-13, 2000, Canada.
7. Z. K. Lu and E. K. Teoh, “A Free form Snake-Based
Algorithm for Lane Detection”, Procs MVA -2000, Nov.
28-30, 2000, Japan.
8. Z. Xue, D. G. Shen and E. K. Teoh, “An Efficient Fuzzy
Algorithm for Aligning Shapes under Affine
Transformation”, to appear in Pattern Recognition, 2000.
Thursday, April 1, 2010
MY SEMINAR ON "AUTOMATED PULSE PLASMA ARC WELDING PROCESS"
Principle of plasma welding
The basic principle of plasma welding is shown in Fig. If a high voltage is applied between the electrode and the workpiece while the plasma gas is flowing, the gas is ionized and becomes conductive, and a plasma arc is generated. Since the plasma arc is constricted by the nozzle, it has a higher energy density than MAG and TIG arcs and can be used as an ultra-high-temperature heat source (above 20,000 °C) with highly concentrated heat. If the plasma current, gas composition, gas flow rate, etc. are controlled selectively, this arc can be used for boring (cutting) and refilling (welding) easily, at a lower cost than a laser. These features are the basis of the welding system we have developed. For the concrete welding process of one-side plasma spot welding, see Fig.
Pictured above is a plasma arc on the left and a TIG arc on the right. Below, the diagram shows the schematic difference between the plasma arc on the left and the TIG arc on the right. Note the cylindrical shape of the plasma arc compared to the conical shape of the TIG arc.
How does it work?
A plasma is a gas heated to an extremely high temperature and ionized so that it becomes electrically conductive. Similar to GTAW (TIG), the plasma arc welding process uses this plasma to transfer an electric arc to a workpiece. The metal to be welded is melted by the intense heat of the arc and fuses together. In the plasma welding torch, a tungsten electrode is located within a copper nozzle having a small opening at the tip. A pilot arc is initiated between the torch electrode and the nozzle tip; this arc is then transferred to the metal to be welded. By forcing the plasma gas and arc through a constricted orifice, the torch delivers a high concentration of heat to a small area. With high-performance welding equipment, the plasma process produces exceptionally high-quality welds. The plasma gas is normally argon. The torch also uses a secondary gas, which can be argon, argon/hydrogen or helium, that helps shield the molten weld puddle, minimizing oxidation of the weld.
A plasma welding torch has an intricate internal design to carry plasma pilot gas, shield gas, water cooling hoses, and power cables for plasma pilot arc current and weld current.
Table 1. Comparison of one-side welding methods and their performances

No. | Evaluation item | Plasma | Arc (MAG) | YAG laser
1 | One-side weldability | Overlapped parts | Limited fillet welding | Overlapped parts
2 | Applicable clearance between sheets | 0 – 2.0 mm | 0 – 1.0 mm | Max. 0.2 mm
3 | Applicable total thickness of sheets | 1.4 – 4.6 mm | 1.6 – | 1.4 – 3.0 mm
4 | Strain | Strain is little | Thermal strain is left | Strain is very little
5 | Welding of core wire | Core wire is not welded since it does not contact | Core wire can be welded since it contacts | Core wire is not welded since it does not contact
6 | Welding strength: JIS Grade A | – | – | –
7 | Productivity (welding time) | 1.5 – 3.5 sec/spot | 1.5 – 3.5 sec/spot | 1.0 – 2.5 sec/spot
8 | Equipment cost (index) | 3.2 | 1 | 13.6 – 22.7
9 | Running cost (index) | 1.9 | 1 | 3.7
Table 2. Basic specifications of the plasma welding system

Unit | Item | Specification
Power supply unit | Rated input voltage | 3-phase 200 V
 | Rated input power | Approx. 32 kVA
 | Rated output current | DC 120 A
Torch unit | Electrode | Embedded tungsten
 | Nozzle | Inside diameter: 2.2 mm
 | Cooling method | Water cooling
Table 3. Capacity of the plasma welding system

Item | Capacity
Applicable thickness | Max. 4.6 mm
Applicable clearance between sheets | 0 – 2.0 mm
Welding time | 2.0 sec (when thickness is 2.0 mm)
Allowable welding angle | 0° (horizontal) – 100°
Gas and plasma properties
Argon, helium, hydrogen, oxygen, nitrogen and carbon dioxide, as well as mixtures of these gases, are used for welding, cutting and coating. The meaningfulness of the simulation results depends strongly on taking appropriate values for the most important transport and thermodynamic properties of the gases, which are:
• molar mass,
• density,
• specific heat and enthalpy,
• viscosity,
• radiative properties,
• ordinary diffusion coefficient,
• electric conductivity and
• thermal conductivity.
These parameters are evaluated assuming local thermodynamic equilibrium (LTE).
This penetration self-adaptive control can be described in six parts, which we will go through step by step.
Features and benefits
There is a range of features and benefits to plasma arc welding. Plasma arc welding features a protected electrode, which allows for less electrode contamination. This is especially advantageous in welding materials that outgas when welded and contaminate the unprotected GTAW electrode. Plasma arc welding is forgiving of arc-length changes due to the arc shape and even heat distribution; as a result, arc stand-off distances are not as critical as in GTAW. Plasma arc welding gives good weld consistency, and no AVC (arc voltage control) is needed in 99% of applications, sometimes even with wire feed. The arc transfer is gentle and consistent, so it provides for welding of thin sheet, fine wire, and miniature components where the harsh GTAW arc start would damage the part to be welded. A stable arc reduces arc wander, so the arc welds where it is aimed, allowing weld tooling in close proximity to the weld joint for optimum heat sinking. Only minimal high-frequency noise is used in plasma arc welding to start the pilot arc, so plasma can be more easily used with NC controls, with less fear of the arc-starting noise causing glitches in any electrical equipment. Another benefit lies in welding applications involving hermetic sealing of electronic components, where the GTAW arc start would cause electrical disturbances possibly damaging the electronic internals of the component to be welded. Arc energy density in plasma welding can reach three times that of TIG welding, causing less weld distortion and smaller welds with higher welding speeds. Welding time can be as short as 0.005 sec, ideal for spot welding of fine wires, while accurate weld times, combined with precision motion devices, provide repeatable weld start/stop positions in seam welding. Low-amperage arc welding (as low as 0.05 A) allows welding of miniature components and good control in downsloping to a weld edge. The arc diameter, chosen via the nozzle orifice used, assists in predicting the weld bead size.
Applications
The plasma process can gently yet consistently start an arc to the tip of wires or other small components and make repeatable welds with very short weld time periods. This is advantageous when welding components such as needles, wires, light bulb filaments, thermocouples, probes, and some surgical instruments. When dealing with hermetically sealed medical and electronic components, sealed via welding, the plasma process provides the ability to:
1) reduce the heat input to the part;
2) weld near delicate insulating seals;
3) start the arc without high-frequency electrical noise, which could be damaging to the electrical internals.
A whole repair industry has sprung up to assist companies wishing to reuse components with slight nicks and dents from misuse or wear. The ability of modern micro-arc power supplies to gently start a low-amperage arc and make repairs has provided users with a unique alternative to conventional repair and heat treatment. Both the micro-TIG and micro-plasma welding processes are used for tool, die, and mold repair. For outside edges the plasma process offers great arc stability and requires less skill to control the weld puddle. To reach inside corners and crevices the TIG process allows the tungsten welding electrode to be extended in order to improve access. In strip metal welding, the plasma process provides the ability to consistently transfer the arc to the workpiece and weld up to the edges of the weld joint. In automatic applications no arc distance control is necessary for long welds, and the process requires less maintenance to the torch components.
Friday, October 9, 2009
PRO-ENGINEER
The copying of features has been made more intuitive with a "Windows"-type copy and paste function. With a feature highlighted, pull down the EDIT menu and select Copy, as shown below. Re-opening the EDIT menu reveals the Paste command to be available. This must be selected before placement is considered. What happens next is dependent upon the feature being copied, but usually the dashboard appears, allowing the new feature to be placed with respect to new references. The sketch may also be varied, as may any other parameter in the definition. Note that once a copy has been made, the original is removed from the "clipboard", so that a further copy requires the feature to be re-selected.

Note that a copy produced in this way is, by default, independent (i.e. it will not change automatically if the original feature is altered). If a dependent copy is required, then the Paste Special option must be selected. Certain classes of feature (e.g. patterns) may result in the Paste Special option being mandatory. This simply involves answering three questions before proceeding with the placement of the new feature.
The copy and paste function is also available in assembly models. Highlight the component to be repeated and select Copy, then Paste. A definition dialogue box appears in the top right corner, wherein new dimensions and references are added. If this fails to produce the desired placement, then the new component can always be redefined in the normal way, via the constraints table.
MIRROR
Features may also be mirrored in a simplified way. Highlight the feature and then select the Mirror icon, either from the EDIT menu or from the bottom right corner of the screen. A mini dashboard appears, on which the only item which must be filled in is the mirror-plane selection. Note that under Options one can choose whether the copy is dependent or independent.
Features and Specifications

Advanced Lighting Capabilities
• Simulate a wide range of lighting, such as spotlight, skylight and distant light
• Enhance lighting with High Dynamic Range Image (HDRI) support
• Vary shadow softness of each light, for example, by simulating sunlight
• Enable light attenuation for real-world simulation of light fall-off
• Vary the intensity of each light to take into account other lights in the scene

Simulate a Wide Range of Materials
• Apply both image maps and procedural maps to a model
• Utilize bump maps to create relief and to represent material texture
• Use decal maps when applying an image on the surface, such as a company logo
• Determine the finish of the material – lacquer, satin or shiny
• Access a standard library of over 200 predefined material types
• Use dynamic texture placement to precisely map materials and finishes to surfaces

Pro/ENGINEER ARX lets you precisely control illumination, color and reflections to show your products in the "best light".

For specific operating system levels, visit:
www.ptc.com/partners/hardware/current/support.htm

Define the Product Environment
• Set the floor, wall and ceiling position, and apply the appearance scheme
• Snap the floor, walls or ceiling to the model
• Choose a cylindrical or rectangular room
• Use real-time rendering to view the room
• Reuse predefined settings across multiple models, such as lights, rooms and effects

Special Effects
• Fog
• Depth of field
• Lens flare (best suited for point lights)
• Light scatter
• Region rendering
• Shadow control

Language Support
• English, German, French, Italian, Spanish, Japanese, Chinese (Simplified and Traditional) and Korean

Platform Requirements
• Microsoft Windows (Vista and XP)
• UNIX platforms (Solaris and HP-UX)

The Pro/ENGINEER Advantage
Pro/ENGINEER is simple to learn and use, and is available in a variety of packages designed to meet your company's specific needs. Whether you need a cost-effective 3D CAD system that contains all the basic design capabilities, or a comprehensive Product Development System that seamlessly connects your extended supply chain, you'll find exactly what you need in a single, fully scalable solution. Choose the package that fits your needs today; as your needs change and grow, you can easily upgrade to the package that is right for you tomorrow, which leverages the same powerful platform – this means no data translation and a consistent user experience.

With Pro/ENGINEER associativity, you can rest assured that no matter where you make a change in your design, your changes are instantly propagated throughout all downstream deliverables. And because Pro/ENGINEER Advanced Rendering stores all your design material, light, room and environment information, a simple re-rendering of the design will instantly produce a photo-quality representation of your updated product with the changes you just made. The integration of Pro/ENGINEER reduces rendering time because you don't have to import your model into another application – everything can be done within Pro/ENGINEER.

In this image, designers used Pro/ENGINEER ARX to create texture maps that realistically show carbon fiber material on the handle, and decals that are used as logos.
A STUDY OF CAD/CAM INTEGRATION
1. Introduction
Industrial world has witnessed significant improvements in
product design and manufacturing since the advent of computer-aided
design (CAD) and computer-aided manufacturing (CAM)
technologies. Although CAD and CAM have been significantly
developed over the last three decades, they have traditionally been
treated as separate activities. Many designers use CAD with little
understanding of CAM. This sometimes results in design of nonmachinable
components or use of expensive tools and difficult
operations to machine non-crucial geometries. In many cases, design
must be modified several times, resulting in increased machining lead
times and cost. Therefore, great savings in machining times and costs
can be achieved if designers can solve machining problems of the
products at the design stage. This can only be achieved through the
use of fully integrated CAD/CAM systems.
The need to integrate CAD and CAM has long been recognised
and many systems have been developed. Kalta and Davies [1]
developed an IGES pre-processor to integrate CAD and CAPP for
turned components. In a different work, a rule-based system was
developed for converting engineering drawings into numerically
controlled machine programs for two dimensional punch objects
[2]. Lye and Yeo [3] and Thakar et al. [4] developed integrated
CAD/CAM systems for design and manufacture of turned
components. Bidanda and Billo [5] reported the development of
a system for generation of NC programs for producing countersink
cutting tools.
Feature recognition and feature-based design have been used
by most researchers to bridge the gap between CAD and CAM.
Many used feature extraction methods; examples are rule-based [6]
and syntactic pattern-based [7] feature recognition. Development of
domain specific feature modeling systems has also been reported.
Examples include OMEGA [8], PRISPLAN [9], CFACA [10],
VITool [11], IFPP [12]. OMEGA uses production rules to define
operation sequences that are then subsequently grouped into set-ups
based on the nature of the tolerances. PRISPLAN, CFACA and
IFPP focus on generating NC programs for machining centers.
Developments of such systems have also been reported by
others [13-19]. However, full CAD/CAM integration has not yet been
achieved. The process of integration of CAD and CAM has been
relatively slow in comparison with the developments made in
each of these technologies. Researchers believe that the slowness
of the integration over the recent decades is essentially
attributable to the incompatibility of database formats and the lack
of common languages. The developed systems can only be
considered as islands of integration and still there is a missing
link fully integrating these two technologies.
In most of the systems developed, the user still must determine
crucial manufacturing parameters such as cutting tools, cutting
speeds, feed rates, cutting depths, etc., requiring expertise and
considerable amount of time. In addition, contributions made to
integrate CAD and CAM systems for milling operation are very
limited, while this operation forms a considerable amount of
machining operations. This paper describes development of an
integrated CAD/CAM system for milling operations. It has been
proven that the use of this system helps reduce machining
lead times and cost, and it is believed that this system goes one step
closer towards achieving fully integrated CAD/CAM systems.
2. System developed
An integrated CAD/CAM system for milling operations has been
developed which helps designers to solve machining problems at the
design stage. A methodology has been employed which provides all
necessary information for machining products automatically. Use of
this system results in reduced machining lead times and cost through
(1) designing machinable components; (2) using available cutting
tools; (3) improving machining efficiency. The system is menu driven
with a user friendly interface. As shown in Figure 1, the system is
composed of following components: design module, expert system,
machining sequence planning module, optimisation module,
manufacturing module, executive program.
3. Design module
The design module is developed based on feature-based
design approach. It is implemented with AutoLISP programming
language and uses AutoCAD’s solid modeller to create product
models. This module has a feature library that includes two
groups of features, pre-defined and user-defined. Pre-defined
features are divided into protrusions and depressions. Protrusions
include cubic, rectangular, circular, polygonal, elliptical, semicircular,
semi-polygonal and semi-elliptical features. Depressions
include pockets, holes, slots and steps. There could be different
numbers and forms of islands inside pockets. Holes are composed
of normal, counterbore, countersink and flat bottom ones. Slots
have different shapes such as normal, round bottom, U, T and V
shapes. Steps are divided into normal, round bottom and notches.
User-defined features are complex features of common use for a
manufacturing firm; they can be created from any combination of
pre-defined features, stored in the feature library, and used
whenever required.
The relationship between different features is represented by
use of a feature tree. This tree represents parent-child relationship
between features where a feature forms its root and other features
form its branches and leaves. A feature can be considered as the
child of another when at least one of its imaginary faces is
coincident with a real face of the parent feature. Feature A becomes
a parent for feature B when there is no tool access to machine
feature B before machining feature A. Each parent feature can have
an unlimited number of child features and all features must have a
parent feature except the root feature. The feature tree can be
organised only when the product design is completed.
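The parent-child feature tree described above can be sketched as follows. This is a hypothetical Python illustration, not the AutoLISP implementation in the paper, and the feature names are invented:

```python
# Illustrative sketch of the feature tree: each feature has exactly one
# parent (except the root) and any number of children, and a feature can
# only be machined after its parent has been machined.

class Feature:
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.children = []

    def add_child(self, child):
        # Feature A becomes the parent of B when there is no tool access
        # to machine B before machining A.
        child.parent = self
        self.children.append(child)

    def machining_order(self):
        # Pre-order traversal: a parent is always visited before its
        # children, so no feature is machined before its parent.
        order = [self.name]
        for child in self.children:
            order.extend(child.machining_order())
        return order

root = Feature("base_block")
pocket = Feature("pocket_1")
hole = Feature("counterbore_hole")
root.add_child(pocket)
pocket.add_child(hole)
print(root.machining_order())  # ['base_block', 'pocket_1', 'counterbore_hole']
```

The pre-order traversal directly encodes the rule that a child is not machinable before its parent.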
The design module uses different co-ordinate frames and
datum points for placement of features in the design model. These
can be defined as:
• Global co-ordinate frame (GCF) is the original or world co-ordinate frame of the component. There is only one GCF for each component.
• Local co-ordinate frame (LCF) is a user-defined co-ordinate frame that makes it easier to design different features. There may be none or several LCFs in a design, and each LCF can be defined with respect to the GCF or another LCF.
• Datum point (DP) is the reference point of a feature, and there is one DP for each feature. The DP determines the position of the feature with respect to the current co-ordinate frame, which can be either the GCF or an LCF. In other words, the DP can be considered as a sub-LCF that determines the location of a feature with respect to the current co-ordinate frame, having the same orientation.
When different co-ordinate frames are employed, the system
must determine the position of each feature with respect to GCF when
it is defined in a LCF and vice versa. It is known that one co-ordinate
frame must be defined with respect to another. A new co-ordinate
frame can be described with a matrix of vectors. In addition, the
sequence of rotations and translations required to relocate from one
co-ordinate frame to another must be described; this can be done with
a transformation matrix. A matrix can be used to represent translation
and rotation because a sequence of translations and rotations can be
combined to produce a complex relocation more easily with matrix
multiplication than with vector addition. For example, if p1 represents
the DP of a feature with respect to an LCF, it can be represented by
p with respect to the GCF, where:

p = A p1   (2)

and, conversely,

p1 = A^-1 p   (3)

where A is the general transformation matrix and A^-1 represents
the inverse of A.
It is important to note that transformation matrix is composed
of four vectors; the first three vectors represent the orientation of
the three axes and the last vector represents the translation of the
origin of the new co-ordinate frame, all with respect to the old coordinate
frame.
In order to allow inversion of the 3x4 transformation matrix,
an additional dummy row (0 0 0 1) is introduced to it, and an
additional fourth component (equal to 1) is appended to the point
vectors; these have no effect on the matrix manipulations. The
transformation matrix therefore becomes 4x4 and the point
vectors become four-component vectors.
When nested LCFs are defined, there will be more than one
transformation matrix. In this case, to transform a point from a nested
LCF to GCF, more than one transformation matrix should be
employed. For example, in the component shown in Figure 2, LCF1 is
defined with respect to GCF, LCF11 is defined with respect to LCF1
and the DP of counterbore hole is defined with respect to LCF11. If p1
represents the DP of counterbore hole with respect to LCF11, it can be
represented by p with respect to GCF where:
p = A1 A2 p1 (5)
where A1 and A2 represent transformation matrices to transform
a point from LCF1 to GCF and from LCF11 to LCF1, respectively.
Similarly, the DP of the counterbore hole can be transformed from
the GCF to LCF11, where:

p1 = A2^-1 A1^-1 p   (6)

(note that the order of the inverses is reversed).
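The frame transformations above can be sketched in a few lines of plain Python. To keep the example short it uses translation-only 4x4 homogeneous matrices (so the inverse is just the negated translation); the frame offsets and the datum point are invented for illustration:

```python
# Transforming a datum point between nested co-ordinate frames using 4x4
# homogeneous matrices (dummy row [0, 0, 0, 1] and a fourth point
# component, as described in the text). Translation-only for brevity.

def translation(tx, ty, tz):
    """4x4 homogeneous transformation matrix for a pure translation."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def mat_vec(A, p):
    return [sum(A[i][k] * p[k] for k in range(4)) for i in range(4)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

A1 = translation(10, 0, 0)   # LCF1 defined with respect to GCF (assumed offset)
A2 = translation(0, 5, 0)    # LCF11 defined with respect to LCF1 (assumed offset)
p1 = [2, 3, 0, 1]            # DP of the hole in LCF11, with dummy 4th component

p = mat_vec(mat_mul(A1, A2), p1)   # p = A1 A2 p1, eq. (5)
print(p)  # [12, 8, 0, 1]

# Going back: p1 = A2^-1 A1^-1 p, eq. (6) -- note the reversed order.
def inv(t):  # inverse of a pure translation matrix
    return translation(-t[0][3], -t[1][3], -t[2][3])

p1_back = mat_vec(mat_mul(inv(A2), inv(A1)), p)
print(p1_back)  # [2, 3, 0, 1]
```

Reversing the order of the inverses matters: (A1 A2)^-1 = A2^-1 A1^-1, which the round trip above confirms.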
4. Expert system
In most machining activities, the selection of cutting tools and
determining machining parameters are still carried out manually by
expert operators using extensive searching through catalogues and
manuals, which requires expertise and time. Therefore, automation
of these functions can be considered as essential for developing
fully integrated CAD/CAM systems. In this work, an expert system
has been developed which is capable of performing the abovementioned
functions. It assists the user in designing machinable
components, selecting cutting tools considering the available tool
resources, and determining initial machining parameters at the
design stage. To accomplish this task, the system uses information
stored in a number of databases and a knowledge base. In total,
seventeen databases have been developed. Of these, nine databases
are devoted to storing the information of different types of cutting
tools (face mills, end mills, drills, etc.). The other eight databases
store appropriate machining parameters (cutting speeds, feed
rates, depths of cut, cutting fluids, etc.) recommended by machining
handbooks and based on the type of operation, quality of the cutting
tool and quality of workpiece material. Use of these databases in the
expert system has many benefits such as:
• A limited number of rules needed to be developed for the knowledge base. A large number of rules would have been required to determine the cutting tools and machining parameters for the operations if there were no databases linked to the system.
• Decreased time required for developing the knowledge base.
• Decreased running time for the expert system.
• Readability and easy access to the information stored in the databases for updating purposes. This is especially useful for updating the information on available cutting tools, since some of those may occasionally be unavailable.
The input to the expert system includes geometric
characteristics of the operation and mechanical characteristics of the
workpiece material. Its output includes appropriate cutting tools and
machining parameters for the operation such as cutting speed, feed
rate, depth of cut, and appropriate type of cutting fluid if required.
These parameters will be determined for each step of machining
operation. When a non-machinable feature is placed on the design,
the expert system uses technological expertise stored in the
knowledge base, and issues a warning message to the user. It also
assists the user to redesign the feature in order to make it machinable.
An example of non-machinable features is a pocket with sharp
corners. In this case the system warns the user and requests him/her
to define a fillet radius for the pocket no smaller than the radius of
the smallest available end mill that can machine the feature.
As shown in Figure 3, the inference engine plays a central role in
the expert system developed. It uses backward chaining, where it
actively integrates expertise rules stored in the knowledge base as
if-then rules with tooling and machining information stored in the
databases. In order to reach its goal, the inference engine
systematically searches for new values to assign to appropriate
variables that are present in the knowledge base. Therefore, it has the
capability of adding to the known store of knowledge. Inference
engine fires appropriate rules from the knowledge base and extracts
necessary information from the databases to select appropriate cutting
tools and determine machining parameters for the operations.
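As a toy illustration of the rule-firing idea (not the authors' implementation), the sketch below resolves the goal "tool" backward from known facts against if-then rules, then pulls the matching record from a tooling database. All rules, tool names and parameter values are invented:

```python
# Toy backward-chaining step: find a rule that concludes the goal and whose
# conditions are satisfied by the known facts, then fetch the database record.

rules = [
    # (conditions, (goal_key, conclusion)): all conditions must match the facts
    ({"feature": "pocket", "material": "aluminium"}, ("tool", "end_mill_HSS")),
    ({"feature": "hole"}, ("tool", "twist_drill")),
]

tool_database = {  # assumed tooling database, keyed by tool name
    "end_mill_HSS": {"diameter_mm": 10, "speed_m_min": 90},
    "twist_drill": {"diameter_mm": 8, "speed_m_min": 30},
}

def infer(goal, facts):
    """Fire the first rule concluding `goal` whose conditions hold,
    and look up the selected tool's machining parameters."""
    for conditions, (key, value) in rules:
        if key == goal and all(facts.get(k) == v for k, v in conditions.items()):
            return value, tool_database[value]
    return None, None

tool, params = infer("tool", {"feature": "pocket", "material": "aluminium"})
print(tool, params)  # end_mill_HSS {'diameter_mm': 10, 'speed_m_min': 90}
```

The real system chains many such rules, adding each conclusion to the known facts before resolving the next sub-goal.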
5. Optimisation module
For improving machining efficiency in this system, an optimisation
module has been developed which determines optimum machining
parameters for each step of machining operation. This module is
activated only if user wishes to use optimum machining parameters
rather than those recommended by handbooks. Cutting speed, feed rate
and depth of cut have the greatest effect on the success of a machining
operation. Accordingly, these parameters have been considered as
variables in developing the optimisation models. The maximum profit
rate has been selected as the default objective function in this work.
Maximum machine power, required surface finish and maximum
cutting forces have been considered as the constraints. These models
and constraints have been programmed in FORTRAN, and the
optimisation method of feasible directions has been used to solve the
problems. It is noteworthy that details of the optimisation module
developed and its models have been published elsewhere [20].
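To make the optimisation idea concrete: the paper solves its models with the method of feasible directions in FORTRAN, whereas the sketch below simply grid-searches cutting speed and feed rate, maximising an assumed profit-rate model under assumed power and surface-finish constraints. Every model and number here is invented for illustration:

```python
# Simplified stand-in for the optimisation module: maximise profit rate
# subject to machine-power and surface-finish constraints.

def profit_rate(v, f):
    # assumed model: material removal earns profit, tool wear penalises speed
    return v * f - 1e-4 * v ** 2

def feasible(v, f, max_power=5.0, max_roughness=3.2):
    power = 0.002 * v * f        # assumed cutting-power model (kW)
    roughness = 50.0 * f ** 2    # assumed surface-roughness model (um Ra)
    return power <= max_power and roughness <= max_roughness

best, best_v, best_f = float("-inf"), None, None
for v in range(50, 301, 10):                    # cutting speed, m/min
    for f in [x / 100 for x in range(5, 41)]:   # feed rate, mm/rev
        if feasible(v, f) and profit_rate(v, f) > best:
            best, best_v, best_f = profit_rate(v, f), v, f

print(best_v, best_f)  # 300 0.25
```

Here the roughness constraint caps the feed at 0.25 mm/rev, and within that cap the profit model still rises with speed, so the grid search settles at the highest speed in range.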
6. Machining sequence planning module
Machining sequence planning is one of the most important
functions in process planning activities. An algorithm has been
developed for automatic machining sequence planning of the
components designed by this system. The algorithm is based on the
bilateral precedence between machining operations and generates
feasible and optimal machining sequences, reducing machining cost
and time. It puts machining operations in a definite order based on
technological and geometric considerations such that the number of
necessary tool changes becomes minimal. This algorithm has been
described elsewhere [21].
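The flavour of such sequencing can be shown with a small greedy sketch (assumed data, not the published algorithm): given operations with precedence constraints and assigned tools, prefer a next operation that reuses the current tool whenever precedence allows, keeping tool changes low:

```python
# Greedy machining-sequence sketch: respect precedence, prefer the current
# tool, count tool changes. Operations and tools are invented examples.

ops = {  # operation -> (tool, set of operations that must come first)
    "face_top": ("face_mill", set()),
    "pocket_1": ("end_mill", {"face_top"}),
    "pocket_2": ("end_mill", {"face_top"}),
    "hole_1":   ("drill", {"pocket_1"}),
}

def sequence(ops):
    done, order, current_tool, changes = set(), [], None, 0
    while len(done) < len(ops):
        # operations whose prerequisites are all finished
        ready = [o for o, (t, pre) in ops.items() if o not in done and pre <= done]
        # stable sort: operations using the current tool come first
        ready.sort(key=lambda o: ops[o][0] != current_tool)
        op = ready[0]
        if ops[op][0] != current_tool:
            changes += 1
            current_tool = ops[op][0]
        order.append(op)
        done.add(op)
    return order, changes

order, changes = sequence(ops)
print(order, changes)  # ['face_top', 'pocket_1', 'pocket_2', 'hole_1'] 3
```

Both pockets are milled back-to-back with the same end mill before the drill is loaded, so only three tool changes occur instead of four.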
7. Manufacturing module
The system stores technological data determined by its
different components in the manufacturing data file (MDF) for
use by the manufacturing module. The MDF provides all necessary
data for each step of machining operations that are required for
NC program generation. An IGES file generated by the CAD
system provides the geometric data. It is noteworthy that IGES is
the most common method for data exchange in current CAD
systems. Using these data, the manufacturing module generates
required tool paths for each step of machining operation and
determines all cutter locations. The user can either accept the generated
tool paths or modify them. Upon confirmation of the generated tool
paths, the required NC program is generated using an existing
post-processor. The NC machine to produce the product can then
use the generated program.
8. Executive program
The executive program plays a central role in the system
developed, as shown in Figure 1. The user interacts with all
components of the system only through this program. Upon running
the system, the user interacts with the executive program, where
he/she selects the desired job from the displayed menus. The
executive program communicates with user and manages the
operations of all components of the system, activates the design
module for designing the product, and at the same time, helps the
user in designing machinable features using the expertise rules
restored in the knowledge base of the expert system. It also
communicates with the expert system for determining cutting tools
and initial machining parameters. Communication with the
machining sequence planning module and optimisation module can
be mentioned as other functions of the executive program. It
collects data generated by all components of the system and stores
them in a file using a specific format called the 'design representation
scheme'. This scheme is developed based on a group of LISP codes
that represent mechanical components as a related set of features.
Each component is represented using a group of pre-defined feature
codes, together with mechanical information of the workpiece
material, and geometric and topological information of its features
in the following form:
( (mechanical_data)
(geometric_data)
(co-ordinate_frames)
(feature_tree) )
This scheme gives a complete and unambiguous representation
of the product that can be used for different purposes such as
verification of generated process plans, NC program generation,
fixture design and so on. Details of this representation scheme can
be found elsewhere [19]. Based on the information stored in this
file, the executive program generates the required MDF file to be
used by the manufacturing module for NC part program generation.
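A guess at how the LISP-style form shown above could be serialised from collected data: the top-level field names follow the scheme, but the nested values are invented for illustration, and the real system emits this directly from AutoLISP rather than Python:

```python
# Serialise nested Python lists into the parenthesised s-expression form
# of the design representation scheme.

def to_sexpr(node):
    if isinstance(node, (list, tuple)):
        return "(" + " ".join(to_sexpr(n) for n in node) + ")"
    return str(node)

design = [
    ["mechanical_data", "material", "AL6061"],
    ["geometric_data", "block", 100, 60, 20],
    ["co-ordinate_frames", ["LCF1", "GCF", 10, 0, 0]],
    ["feature_tree", ["base_block", ["pocket_1", ["counterbore_hole"]]]],
]
print(to_sexpr(design))
```

The nesting of the feature_tree entry mirrors the parent-child tree: each sublist is a feature followed by its children.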
9. Case study
A number of test components have been designed and
produced using the system developed. Results showed significant
improvements in machining times and costs in design and
machining of these components. For example, in producing the
sample part shown in Figures 2 and 4, machining time and cost
were reduced by 38% and 42% respectively. This resulted in
a significant improvement in the total profit rate of about 350%,
increasing it from $0.71/min to $2.49/min.
Fig. 4. The sample part, dimensioned
10. Conclusions
In this work several goals have been achieved in the line of
developing fully integrated CAD/CAM systems. Different
components required for developing such systems have been
identified and various problems that arose in the development of
these systems have been dealt with, leading to an adequate basis for
complete integration of CAD and CAM technologies. The system
developed allows simultaneous generation of all information
required to satisfy machining requirements of the design such as its
machinability and availability of the required tooling resources. It
thus integrates different areas of design and manufacturing, giving
each area an appreciation of its role in the design process. It has
been proven that use of this system results in considerable
improvements in machining efficiency, time and cost.
Although much of the work described here goes beyond the
scope of the published literature, it should be noted that the
system developed cannot be considered a complete solution to
the CAD/CAM integration problem. Further work requires
including other manufacturing activities that are considered in the
concurrent engineering concept. In this direction, further integration
of the system developed with systems such as MRP, MRP II and
assembly sequence planning packages is highly desirable.