Conference Proceedings - RTMT-09
National Conference on
“RTMT ‘09”
1. Department of Mechanical Engineering, St. Xavier's Catholic College of Engineering, Chunkankadai-629 807, Kanyakumari. Email: [email protected]
2. Department of Mechanical Engineering, St. Xavier's Catholic College of Engineering, Chunkankadai-629 807, Kanyakumari. Email: [email protected]
ABSTRACT
1. INTRODUCTION
The world's fossil fuel production, which meets about 80% of our energy requirements today, will start to decline in the near future. On the other hand, the demand for energy is ever increasing as the nations of the world try to better their living standards, and research into converting alternative sources into usable forms of energy is being accelerated. Since fossil fuels cause great damage to the environment through the greenhouse effect, ozone layer depletion, acid rain, air pollution, oil spills, etc., the research emphasis is on clean energy sources and carriers. A quick look at the currently available alternatives shows that they fall into two main categories: long-term alternatives and short-term alternatives. Liquefied petroleum gas, natural gas, alcohol and many other hydrocarbon fuels are considered short-term solutions, since they are derived from sources that are finite and suffering from overstress and exhaustion. Hydrogen, on the other hand, represents the long-term solution due to its unique properties. It can be produced from a variety of energy sources, such as water, solar, nuclear and fossil sources, and it can be converted to useful forms of energy efficiently and with the least detrimental environmental effect. In spite of the numerous advantages of hydrogen, more research still has to be performed to optimize engine design for hydrogen.
2. HYDROGEN PROPERTIES
Some of the key overall properties of hydrogen that are relevant to its
employment as an engine fuel are listed in Table 1.
Table 1 - Hydrogen properties relevant to ICEs, compared with CNG and gasoline

Property                                  Hydrogen    CNG        Gasoline
Density (kg/m^3)                          0.0824      0.72       730 (a)
Flammability limits (volume % in air)     4-75        4.3-15     1.4-7.6
Auto-ignition temperature in air (K)      858         723        550
Minimum ignition energy (mJ) (b)          0.02        0.28       0.24
Flame velocity (m/s) (b)                  1.85        0.38       0.37-0.43
Adiabatic flame temperature (K) (b)       2480        2214       2580
Quenching distance (mm) (b)               0.64        2.1 (c)    ~2
Stoichiometric air/fuel mass ratio        34.48       14.49      14.7
Stoichiometric volume fraction (%)        29.53       9.48       ~2 (d)
Lower heating value (MJ/kg)               119.7       45.8       44.79
Heat of combustion (MJ/kg air) (b)        3.37        2.9        2.83

(a: liquid at 0 °C; b: at stoichiometric; c: methane; d: vapor; e: at 25 °C and 1 atm)
3. AIR FUEL RATIO
The stoichiometric or chemically correct A/F ratio for the complete combustion
of hydrogen in air is about 34:1 by mass. This means that for complete combustion,
34 pounds of air are required for every pound of hydrogen. This is much higher than
the 14.7:1 A/F ratio required for gasoline. Since hydrogen is a gaseous fuel at
ambient conditions it displaces more of the combustion chamber than a liquid fuel.
Consequently less of the combustion chamber can be occupied by air. At
stoichiometric conditions, hydrogen displaces about 30% of the combustion
chamber, compared to about 1 to 2% for gasoline. Because of hydrogen’s wide
range of flammability, hydrogen engines can run on A/F ratios of anywhere from 34:1
(stoichiometric) to 180:1. The A/F ratio can also be expressed in terms of
equivalence ratio, denoted by phi (Φ). Phi is equal to the stoichiometric A/F ratio
divided by the actual A/F ratio. For a stoichiometric mixture, the actual A/F ratio is
equal to the stoichiometric A/F ratio and thus the phi equals unity (one). For lean A/F
ratios, phi will be a value less than one. For example, a phi of 0.5 means that there is
only enough fuel available in the mixture to oxidize with half of the air available.
Another way of saying this is that there is twice as much air available for combustion
than is theoretically required.
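As a worked illustration of these definitions, the minimal Python sketch below computes phi for a few air/fuel ratios across hydrogen's operating range; the helper name is ours, and the stoichiometric value is taken from Table 1.

    # Equivalence ratio phi = (A/F)_stoich / (A/F)_actual for hydrogen-air
    # mixtures; phi = 1 is stoichiometric, phi < 1 is lean.
    AF_STOICH_H2 = 34.48  # stoichiometric air/fuel mass ratio (Table 1)

    def equivalence_ratio(af_actual, af_stoich=AF_STOICH_H2):
        return af_stoich / af_actual

    # A/F = 68.96 gives phi = 0.5: twice the theoretically required air.
    for af in (34.48, 68.96, 180.0):
        print("A/F = %6.2f  ->  phi = %.2f" % (af, equivalence_ratio(af)))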
Depending on how the fuel is metered, the maximum output for a hydrogen engine can be either 15% higher or 15% lower than that of gasoline if a stoichiometric air/fuel ratio is used; the theoretical maximum power output from a hydrogen engine depends on the air/fuel ratio and the fuel injection method used. In a gasoline-fuelled engine, the volume occupied by the fuel is about 1.7% of the mixture, whereas under stoichiometric conditions gaseous hydrogen displaces 29% of the combustion chamber, leaving only 71% for the air. As a result, the energy content of this mixture is less than it would be if the fuel were gasoline, and a carbureted hydrogen engine consequently suffers a power output loss of about 15%. Since both the carbureted and port injection methods mix the fuel and air before they enter the combustion chamber, these systems limit the maximum theoretical power obtainable to approximately 85% of that of gasoline engines.
Figure 3- Combustion Chamber Volumetric and Energy Comparison
for Gasoline and Hydrogen Fueled Engines
For direct injection systems, which mix the fuel with the air after the intake valve has
closed (and thus the combustion chamber has 100% air), the maximum output of the
engine can be approximately 15% higher than that for gasoline engines.
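The ~85% and ~115% figures can be checked with a back-of-the-envelope calculation comparing the heating value carried per litre of intake charge. The sketch below uses the property values of Table 1; the air density at 25 °C is our assumption, and the 1.7% gasoline vapour fraction comes from the text.

    RHO_AIR = 1.184        # g/L at 25 C, 1 atm (assumed)
    RHO_H2 = 0.0824        # g/L (Table 1)
    LHV_H2, LHV_GAS = 119.7, 44.79     # kJ/g (Table 1)
    AF_H2, AF_GAS = 34.48, 14.7        # stoichiometric air/fuel mass ratios
    X_H2 = 0.2953          # stoichiometric H2 volume fraction (Table 1)

    # Premixed gasoline: fuel vapour occupies ~1.7% of the mixture volume.
    e_gas = 0.983 * RHO_AIR / AF_GAS * LHV_GAS      # kJ per litre of mixture
    # Premixed (carburetted/port) hydrogen: H2 displaces ~30% of the charge.
    e_h2_pre = X_H2 * RHO_H2 * LHV_H2               # kJ per litre of mixture
    # Direct injection: the cylinder first fills with 100% air.
    e_h2_di = RHO_AIR / AF_H2 * LHV_H2              # kJ per litre of air

    print(e_h2_pre / e_gas)   # ~0.82 -> roughly the 85% premixed limit
    print(e_h2_di / e_gas)    # ~1.16 -> roughly the 115% DI potential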
Figure 4 - Power output for port injection of hydrogen and direct injection of hydrogen at various speeds
5.2. Improved Mean Effective Pressure
Since external mixture formation with hydrogen displaces a noticeable amount of air, the indicated mean effective pressure is lower than that of gasoline operation. Direct injection also provides the means of operating at a higher relative air/fuel ratio. The relative air/fuel ratio, lambda, can be expressed as the actual air/fuel ratio divided by the stoichiometric air/fuel ratio:

λ = (A/F)actual / (A/F)stoich = 1/Φ
5.3 Emission
NOx emission behaviour further motivates the direct injection method: with direct injection, the engine-out emissions can be distinctly reduced at high engine loads by delaying the start of injection. Based on its findings, BMW has concluded that direct injection of hydrogen into the combustion chamber may provide the means to increase engine efficiency and decrease emissions while maintaining an optimal level of power output.
Figure 6 - NOx Emission vs. Indicated Mean Effective Pressure
6. CONCLUSION
From this study it is expected that hydrogen-fueled engines of the future will be based on DCI technology. With this method, the power output of a direct-injected hydrogen engine is 15% more than that of a gasoline engine and 35% more than that of a hydrogen port-fuel-injection engine. Direct injection also solves the problem of pre-ignition in the intake manifold and reduces pre-ignition within the combustion chamber during compression. With direct injection, the engine-out emissions can be distinctly reduced.
It is helpful to see an example of project tracking that does not include earned
value performance management. Consider a project that has been planned in detail,
including a time-phased spend plan for all elements of work. Figure 1 shows the
cumulative budget for this project as a function of time (labeled PV). It also shows
the cumulative actual cost of the project through week 8. To those unfamiliar with
EVM, it might appear that this project was over budget through week 4 and then
under budget from week 6 through week 8. However, what is missing from this chart
is any understanding of how much work has been accomplished during the project. If
the project was actually completed at week 8, then the project would actually be well
under budget and well ahead of schedule. If, on the other hand, the project is only
10% complete at week 8, the project is significantly over budget and behind
schedule. A method is needed to measure technical performance objectively and
quantitatively, and that is what EVM accomplishes.
Consider the same project, except this time the project plan includes pre-defined methods of quantifying the accomplishment of work. At the end of each week, the project manager identifies every detailed element of work that has been completed and sums the PV for each of these completed elements. This accumulation is called "earned value" (EV), and it can be computed monthly, weekly, or as progress is made. Earned value is also commonly calculated as Percent Complete times Budget at Completion (BAC).
Earned Value (EV)
Figure 1.2 shows the EV curve along with the PV curve from Figure 1.1. The
chart indicates that technical performance (i.e., progress) started more rapidly than
planned, but slowed significantly and fell behind schedule at weeks 7 and 8. This
chart illustrates the schedule performance aspect of EVM. It is complementary to
critical path or critical chain schedule management.
Figure 1.3 shows the same EV curve with the actual cost data from
Figure 1.1. It can be seen that the project was actually under budget, relative to the
amount of work accomplished, since the start of the project. This is a much better
conclusion than might be derived from Figure 1.1.
Figure 1.4 shows all three curves together – which is a typical EVM line chart.
The best way to read these three-line charts is to identify the EV curve first, then
compare it to PV (for schedule performance) and AC (for cost performance). It can
be seen from this illustration that a true understanding of cost performance and
schedule performance relies first on measuring technical performance objectively.
This is the foundational principle of EVM.
Figure 1.4 EVM with Planned value, Earned value and Actual cost
Once Earned Value and Planned Value are known, they can then be used to
determine schedule and cost variance, and calculate performance efficiency.
Variance Calculations
• Schedule Variance (SV) = Earned Value – Planned Value.
The difference between what was planned to be completed and what has
actually been completed as of the current date.
• Cost Variance (CV) = Earned Value – Actual Costs.
The difference between the work that has been accomplished (in Rupees)
and how much was spent to accomplish it.
In the graph below, the project shown has a negative Schedule Variance, because it has "earned" less value than was planned as of the current date. However, it has a positive Cost Variance, because the Earned Value is greater than the Actual Costs accrued.
The Schedule Performance and Cost Performance Indices not only monitor
current project performance, they can also be used to predict future performance
trends.
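These definitions translate directly into code. The sketch below is our illustration, not part of the original text; the first row of the table that follows is used as a worked example.

    def evm_metrics(pv, ev, ac):
        """Schedule/cost variances and performance indices from PV, EV, AC."""
        return {
            "SV": ev - pv,    # negative -> behind schedule
            "CV": ev - ac,    # negative -> over budget
            "SPI": ev / pv,   # < 1 -> behind schedule
            "CPI": ev / ac,   # < 1 -> over budget
        }

    # "Pre pilot plan": PV = 63,000, EV = 58,000, AC = 62,500 (Rs.)
    print(evm_metrics(63000, 58000, 62500))
    # SV = -5,000; CV = -4,500; SPI ~ 0.92; CPI ~ 0.93, matching the table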
Sl.No.  Work breakdown          PV (Rs.)  EV (Rs.)  AC (Rs.)  CV (Rs.)   CV %     SV (Rs.)   SV %     CPI      SPI
        structure element                                     (EV-AC)    (CV/EV)  (EV-PV)    (SV/PV)  (EV/AC)  (EV/PV)
1       Pre pilot plan          63,000    58,000    62,500    (-)4,500   (-)7.8   (-)5,000   (-)7.9   0.93     0.92
2       Checklists              64,000    48,000    46,800    1,200      2.5      (-)16,000  (-)25.0  1.03     0.75
3       Curriculum              23,000    20,000    23,500    (-)3,500   (-)17.5  (-)3,000   (-)13.0  0.85     0.87
4       Mid-term Evaluation     68,000    68,000    72,500    (-)4,500   (-)6.6   0          0.0      0.94     1.00
5       Implementation Support  12,000    10,000    10,000    0          0.0      (-)2,000   (-)16.7  1.00     0.83
6       Manual of practice      7,000     6,200     6,000     200        3.2      (-)800     (-)11.4  1.03     0.89
CHAPTER 4
If the implementation of EVM is not scaled to match the size and complexity of
the project at hand, it may be either too lightweight (e.g. not standard-compliant) or
too costly. The benefits of any implementation should far outweigh its cost of
implementation and maintenance. Thus, EVM is a project management discipline
that should pay for itself many times over.
EVM has no provision to measure project quality, so it is possible for EVM to
indicate a project is under budget, ahead of schedule and scope fully executed, but
still have unhappy clients and ultimately unsuccessful results. In other words, EVM is
only one tool in the project manager's toolbox.
The use of EVM presumes that stakeholders care about measuring progress
objectively. If a project team does not want to measure performance objectively, or if
the organization is performing EVM just to fulfill a customer requirement, EVM is
unlikely to help.
CHAPTER 5
CONCLUSION
Earned Value Analysis is a better method of program/project management because it integrates cost, schedule and scope, and can be used to forecast future performance and project completion dates. It is an "early warning" program/project management tool that enables managers to identify and control problems before they become insurmountable. It allows projects to be managed better: on time, on budget.
Introduction
Although more than 150 CAPP systems have been reported in the literature, only a few have considered the optimization of operation sequencing. Works on alternative sequences of operations have used a precedence matrix in operation sequencing for prismatic components, after analyzing the technological and feasibility constraints, and have optimized the sequence for minimum cutting-tool-change and tool-travel times. Two important issues mentioned in the latter work are the elimination of infeasible machining operation sequences and the use of a tree structure for enumerating all the paths in order to weed out the infeasible sequences. The need for heuristic approaches that randomly generate alternative sequences, and thereby alternative process plans, is stressed in research works. As the operation sequencing problem involves a large number of interacting constraints, it is very difficult to formulate and solve using dedicated search techniques like integer programming, branch and bound, and dynamic programming. Different search methods are represented in Figure 1.
Figure 1: Search Techniques
Genetic Algorithms
[Figure 3: Working principle of GAs. A population of candidate solutions evolves through selection of parents, crossover and mutation to produce offspring, whose fitness is evaluated before replacement into the population; the cycle repeats until the goal is reached.]
The simulation in the Metropolis algorithm calculates the new energy of the system. If the energy has decreased, the system moves to this state; if the energy has increased, the new state is accepted with the Boltzmann probability. A certain number of iterations are carried out at each temperature and then the temperature is decreased; this is repeated until the system freezes into a steady state. This criterion is used directly in simulated annealing, although it is usual to drop the Boltzmann constant, as it was only introduced into the equation to cope with different materials. Therefore, the probability of accepting a worse state is given by

p = exp(-ΔE / T)

where ΔE is the increase in energy and T is the current temperature.
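The acceptance rule just described can be sketched in a few lines of Python (an illustration under the stated assumptions, not code from this work):

    import math
    import random

    def accept(delta_e, temperature):
        """Metropolis rule with the Boltzmann constant dropped."""
        if delta_e <= 0:                 # energy decreased: always move
            return True
        return random.random() < math.exp(-delta_e / temperature)

    # Typical cooling schedule: several trials per temperature, then cool.
    T, alpha = 100.0, 0.95
    for _ in range(50):                  # until the system "freezes"
        # ... propose a neighbouring state, compute delta_e, call accept()
        T *= alpha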
Case Study
1  A1 - DRILLING OF HOLE
2  B1 - ROUGH FACING
3  B2 - FINISH FACING
4  C1 - COUNTER BORING OF HOLE
5  D1 - DRILLING OF HOLE
6  D2 - ROUGH BORING OF HOLE
7  D3 - FINISH BORING OF HOLE
8  E1 - CHAMFERING
Table 2: Operations
Precedence cost matrix (PCM):

          1    2    3    4    5    6    7    8
(A1) 1    -    100  100  1    100  100  100  100
(B1) 2    11   -    0    100  1    100  100  100
(B2) 3    11   100  -    100  1    100  1    1
(C1) 4    100  100  100  -    100  100  100  100
(D1) 5    11   1    100  100  -    0    100  100
(D2) 6    11   1    100  100  100  -    100  100
(D3) 7    11   100  100  100  100  100  -    100
(E1) 8    11   1    100  100  100  100  1    -
The entries of the matrix reflect the number of tasks to be performed in each category of attribute, such as machining parameter change, tool change, set-up change and machine change (Table 1). For the present problem, the string (chromosome) is represented by a collection of eight elements (genes) corresponding to the eight features (operations) given in the precedence cost matrix of the given part, namely 1, 2, 3, 4, 5, 6, 7, 8.
The initial population cannot consist of simple randomly generated strings, as the local precedence of the operations/features of each form feature could not then be guaranteed. To create a valid initial string, an element of the string is generated randomly from the first operations of the form-feature groups (to follow the natural flow of the operations), and the procedure is repeated, selecting elements from the remaining operations of the groups, until all the operations are represented in the string. Each string in the population contains eight elements corresponding to the eight operations. The first element of the string is generated randomly from among the first operations of the form-feature groups; the next element is then generated randomly from the next eligible operations, either of the same form feature or of another form-feature group. This process is repeated until all the elements of the string are filled from the remaining form-feature groups. Similarly, the other strings of the population are generated, each keeping its local operation precedence.
The objective of the sequencing problem is to obtain an optimal operation sequence that results in minimum production cost from the given precedence cost matrix (PCM). The objective function is calculated for each string in the population as the sum of the relative costs between pairs of features (operations). The relative costs correspond to the number of tasks that need to be performed in each category of attribute, such as machining parameter change, tool change, set-up change and machine change, and the type of constraints one feature has with respect to the other. The fitness value of each string is calculated and the expected count of each string for the next generation is obtained. This is represented in Table 3.
The actual count of each string is obtained based on the string weightage (survival of the fittest) so that the total count becomes the population size. This genetic operator is used to generate a new population, which has better strings than the old population. The selection of the better strings is based on the actual count arrived at in the earlier step. The reproduced population is called parent 1 and is used for the next genetic operation, i.e. crossover. This population is shown in the first column of Table 4. In this analysis, a new crossover is designed to ensure the local precedence of operations while generating feasible offspring. To produce a feasible offspring, two parents are randomly selected from the population. Based on the string length, two crossover sites are randomly generated to select a segment in one parent between these crossover sites. The offspring, child 1, is generated by arranging the elements of the selected segment in this parent according to the order in which they appear in the other parent, with the order of the remaining elements being the same as in the first parent. The roles of the parents are then exchanged in order to generate another offspring, child 2. The crossover operator can be illustrated as follows (see the sketch below).
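A minimal sketch of this crossover (our illustration; the function name is an assumption):

    import random

    def order_preserving_crossover(p1, p2):
        """Re-arrange a random segment of p1 in the order its genes appear in p2."""
        i, j = sorted(random.sample(range(len(p1)), 2))
        segment = set(p1[i:j + 1])
        child = p1[:]
        child[i:j + 1] = [g for g in p2 if g in segment]  # parent-2 order
        return child

    p1 = [1, 2, 3, 4, 5, 6, 7, 8]
    p2 = [3, 1, 4, 2, 8, 5, 6, 7]
    child1 = order_preserving_crossover(p1, p2)
    child2 = order_preserving_crossover(p2, p1)  # roles exchanged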
Conclusion
After all, if there were no limits on execution time, one could always perform a complete search and get the best possible solution, and most stochastic algorithms can do the same, given unlimited time. In practice there are always limits on execution time, so there is a need for efficient search techniques like GA and SA. Optimization of the process plan is one of the duties of a CAPP system. Most optimization systems related to process planning have been developed as off-line systems, so they cannot be used as integrated modules within process planning packages; optimization systems therefore need to be integrated with the CAPP system. The importance of AI techniques in the optimization of CAPP functions has been demonstrated by this research project too. The potential and power of AI are very great, and it is believed that with the exploitation of AI methods it is possible to increase the capabilities of intelligent manufacturing systems (IMSs). GAs have the advantage of rapidly reaching the region that includes the global optimum, owing to their parallel structure. However, the most important drawback of the GA is that it can easily be trapped in local optima. A mixed methodology can be used to increase the performance of the GA, by coupling the parallel computing ability of GAs with the advantages of SA, which attempts to escape local optima. So a hybrid technique has been developed in order to overcome the drawbacks and reduce the computational time.
* Corresponding Author,
Mail: [email protected]
Mobile: 9865357377
Abstract
1. Introduction
QuickField can solve both linear and nonlinear magnetic problems. The magnetic field may be induced by concentrated or distributed currents, or by permanent or external magnets. The problem considered here involves a nonlinear magnetic field. Solenoid actuators are used in many applications; the major ones include valves, water relays and switches. Specific applications of solenoid actuators are automatic door locks, office equipment, printers, electric locks, photographic, optical and medical instrumentation, and automatic teller machines.
Solenoid valves come in various configurations and sizes; a solenoid valve can be of the normally open, normally closed or two-way type.
2.1. Selecting a Solenoid actuator
Force requirements
Electrical requirements (current driving actuator etc.)
Duty cycle
Maximum envelope dimensions
Temperature extremes
Termination requirements
Dimensions
The field strength H and the flux density B are related by the magnetic permeability μ of the substance that the field is in: B = μH.
3. Problem formulation
The problem taken up is the plunger movement of a solenoid actuator. In this case the plunger movement is controlled using the label mover in QuickField. This is used to calculate the mechanical force, flux density and many other parameters.
3.1. Methodology
The geometry of the model is drawn in specified units using Cartesian coordinates, taking the grid as the reference. The drawn geometry is enclosed in a closed loop as shown in Fig. 2. Then the mesh is created on the geometry. Labels are assigned to the geometric objects describing the material properties, sources and boundary conditions.
Figure 2 Quick field grid distributions for the enclosed loop array
B. Material Property
The material property for each region of the model is given. Air acts on the outer surface of the model, so for the air, the coil and the plunger the relative permeability is taken as μ = 1.
C. Loading Source
The two types of loading sources available in the QuickField software are field sources and conductor connections. The loading source chosen for this analysis is of the field source type; current density is the loading source available in this type. The current density for the iron and the plunger is given below.
D. Boundary conditions
The various boundary conditions available in the QuickField software are magnetic potential, tangential field and zero normal flux. Here, the magnetic potential condition is applied on the outer boundary, with A0 = 0. After the problem has been described, it is solved and the results are obtained.
E. Post Processing
The output results, such as the mechanical force, magnetomotive force, magnetic flux, surface energy, average surface potential, line integral of flux density and surface integral of strength, are obtained. The typical solenoid actuator magnetic field is shown in Fig. 3, the flux density of the actuator in Fig. 4, and the movement of the plunger in Fig. 5.
[Fig. 4 colour scale: flux density B from 0.0000 to 0.1130 T]
The mechanical force F, the flux density B and the field strength for the solenoid valve are obtained and listed in Table 1. The B-H curve for the core and the plunger is shown in Fig. 6, and the plot of flux density vs. plunger length in Fig. 8.

Table 1 (column headings): S.No. | Step | Strength (A/m) | Flux density (Wb) | Mechanical force, F (N)
ABSTRACT
A wall-climbing robot intended for painting, inspection and cleaning applications has been developed. The robot has characteristic features of kinematic design and is capable of moving a tool at a specific speed on a complex surface. In real field conditions, labour-intensive inspection demands great attention since it is subject to human error and limited reliability. The robot uses just two actuators and four suction cups. This robot, with two degrees of freedom on the wall, is a successful attempt at a semi-autonomous robot for industrial applications.
Submitted by:
D.SUDARSAN.B.E.M.B.A.M.M.M.
III YEAR
MADURAI.
Placement Officer
VIRUDHUNAGAR
Mobile: 9442325078
9842981838
Email: [email protected]
Guided by:
Dr.A.ASHA.M.E.Ph.D
Head of the Department - Mechanical
PROFESSOR/MECHANICAL
D.SUDARSAN1
III YEAR M.E. (Manf). – Part time, KLN COLLEGE OF ENGG, Madurai.
Dr.A.ASHA2
H.O.D. – Mechanical, KLN COLLEGE OF ENGG, Madurai.
ABSTRACT
Chances are good that the organization's constraint can be managed in such a way that production (or a production-like operation) is run for high profit. If this is the case, then you will benefit from investigating and implementing a constraint-based method of production management.
THEORY OF CONSTRAINTS:
Product Mix: Companies often need to determine the quantity of each product to
produce on a monthly basis. In its simplest form, the product mix problem involves
how to determine the amount of each product that should be produced during a
month to maximize profits. Product mix must usually adhere to the following
constraints:
Determination of optimal product mix for maximizing the profit for a sequence
RELATED WORK:
1. Richard Lubbe et al. (1992) compared the ILP and TOC methods for solving the product mix problem, and concluded that the TOC methodology produces better results than the ILP method.
2. B. Ronen et al. (1992) proposed the cost utilization model to analyze production lines and material flow. This model combines the Pareto approach with the TOC approach.
3. Gerhard Plenert (1993) compared the TOC and ILP with their limitations, and concluded that ILP is the better planning tool and comes closer to achieving the goal of maximizing throughput.
4. Godfrey C. Onwubolu (2001) used a tabu-search-based TOC product mix heuristic to identify optimal or near-optimal solutions for small to medium-size problems, concluding that when there are multiple constraint resources in the product mix problem, the tabu-search-based TOC approach achieves the profit maximization goal better than the traditional algorithm.
5. S. Pass et al. (2003) presented a systematic approach for managing a market-constrained environment using a hi-tech industry case study, suggesting a way to reduce costs in non-critical areas and stressing the need for lead-time protective buffers.
6. V. J. Mabin et al. (2003) investigated the product mix dilemma using a variety of TOC approaches that complement and extend traditional treatments such as ILP, spreadsheet and graphical approaches. Their algorithm finds a product mix, but one which does not satisfy the market demand.
Methodology:
In the existing TOC product mix heuristic, the throughput is taken as the difference between the selling price and the raw material cost. The steps are:
(a) Calculate the ratio of the throughput to the product's constraint hours (TH/CH).
(b) In descending order of TH/CH, reserve constraint capacity to build each product until the constraint resource's capacity is exhausted.
(c) Plan to produce all the products that do not require processing time on the constraint resource (bottleneck), in descending order of throughput ratio.
This paper modifies the above heuristic, considering more factors to obtain an optimal solution.
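For illustration, steps (a)-(c) of the existing heuristic can be sketched as follows (the data and field names are invented placeholders, not the case-study figures):

    def toc_product_mix(products, capacity):
        """products: dicts with throughput TH, constraint hours CH, demand."""
        plan = {}
        # Step (c): products that never touch the bottleneck get full demand.
        for p in (q for q in products if q["CH"] == 0):
            plan[p["name"]] = p["demand"]
        # Steps (a)-(b): rank by TH/CH, load the bottleneck until exhausted.
        for p in sorted((q for q in products if q["CH"] > 0),
                        key=lambda q: q["TH"] / q["CH"], reverse=True):
            qty = min(p["demand"], capacity // p["CH"])
            plan[p["name"]] = qty
            capacity -= qty * p["CH"]
        return plan

    products = [{"name": "A", "TH": 45.0, "CH": 15, "demand": 100},
                {"name": "B", "TH": 60.0, "CH": 30, "demand": 50},
                {"name": "C", "TH": 25.0, "CH": 0,  "demand": 60}]
    print(toc_product_mix(products, capacity=10080))  # minutes per week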
Proposed Methodology:
In reality, profit does not depend upon the unit contributory margin alone, i.e. the difference between the selling price and the raw material cost. Hence, in the modified approach, profit is calculated as the difference between the sales value and the total cost.
The following are the steps involved in the Modified TOC product mix heuristic.
Step 1: Identify the constraint.
Case Study:
The case-study company is in Thoothukudi (Dt.), 80 km from Madurai, where four types of a single end product are produced. The other workstations consist of a single machine each, and the workstations are arranged in a sequence common to all types. The sales value and demand for each model differ in the market, but the profit ratio is the same for all types of the single end product. Three shifts per day and 7 working days per week are practiced (i.e., 10,080 min per week). The company needs to carry out manufacturing with the optimal product mix; the rate is Rs. 18.00 per hour for all types of products. The necessary data are tabulated in Table 4.9. The total money available for manufacturing the yarn per week is Rs. 30,000.00 only. Loading and unloading time for each model is 2.5 min. Fixed cost is to
Solution:
With the above processing steps of TOC and Modified TOC, the profit is obtained. The profit obtained for the different heuristics under different conditions is charted in Figure 4.1. In this figure the conditions are taken on the X axis and the total profit on the Y axis. On the X axis, 1 denotes the product mix under capacity limitations; 2 and 3 denote money availability with capacity limitations, and money availability and market conditions with capacity limitations, respectively.
[Figure 4.1: Total profit (Rs.) for ILP, TOC and Modified TOC under the different conditions 1-6]
• Total profit obtained by the Modified TOC product mix heuristic is higher than that of the existing TOC product mix heuristic and nearer to the ILP profit when capacity and money constraints are considered.
• Total profit obtained by the Modified TOC product mix heuristic is lower than the ILP profit when the market constraints are also considered. This may be due to the conflict caused by the existence of multiple managerial constraints.
Conclusion:
The modified TOC heuristic performs better than the original TOC product mix heuristic. The original TOC heuristic is capable of providing optimal solutions only when capacity (physical) constraints exist, whereas the modified TOC product mix heuristic provides optimal solutions when multiple constraints (physical and managerial) exist. The profit obtained by the modified TOC product mix heuristic is higher than that of the existing TOC product mix heuristic and nearer to the ILP profit; in cases with multiple managerial constraints it falls slightly short, which may be due to the conflicts caused by the multiple constraints.
Hence the modified approach considers all the factors that hinder profit and gives a better optimum product mix strategy than the traditional TOC, and it is also easier than ILP.
SQUARE CAVITY
ABSTRACT:
This paper focuses on the simulation of flow inside a lid-driven square cavity containing an incompressible fluid. The governing (Navier-Stokes) equations are solved numerically, using a scheme based on the SOLA algorithm proposed for the solution of the 2D equations. The fundamental solutions of the Stokes equations are adopted as the sources to obtain flow field solutions. The present method is validated against other numerical schemes for lid-driven flows in a square cavity. Different cases are considered in which the Reynolds number of the flow is varied. The objective is to choose a numerical scheme, analyze the vortex formation and draw the velocity profiles at the mid horizontal and vertical sections.
Introduction:
A large number of codes have been developed for incompressible flows, and it is quite difficult to categorize them. These codes may differ in one or more aspects over a total of eight parameters.
PROBLEM DEFINITION.
Consider a square cavity with walls on three sides filled with incompressible
viscous fluid. The lid of the cavity moves to the right with uniform speed, parallel to
itself. This movement sets the fluid inside in motion. This problem has been used as
a test case for comparing different numerical methods for solving the incompressible
N-S Equations.
SOLA CODE:
The algorithm adopted for solving the Navier-Stokes equations is the SOLA algorithm. The SOLA code is based on a finite difference scheme using an explicit algorithm; the code was developed at the Los Alamos Laboratory by Hirt et al. (1975). In this method, velocities are computed by solving the momentum equations explicitly, using the velocity and pressure fields of the previous time step. The updated velocity field, however, does not satisfy the equation of continuity, so the velocities (and the pressure) are then adjusted to satisfy the continuity equation in an iterative manner.
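The pressure-velocity adjustment at the heart of SOLA can be sketched as follows (a simplified illustration of the published scheme, not the original Los Alamos code; a full solver would add the explicit momentum step and the lid/wall boundary conditions at each time step):

    import numpy as np

    def sola_pressure_sweep(u, v, p, dx, dy, dt, beta=1.7, tol=1e-4, iters=200):
        """Iteratively drive the divergence of each cell to zero (staggered grid)."""
        for _ in range(iters):
            worst = 0.0
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    div = ((u[i, j] - u[i - 1, j]) / dx +
                           (v[i, j] - v[i, j - 1]) / dy)
                    worst = max(worst, abs(div))
                    # Over-relaxed pressure change that cancels the divergence.
                    dp = -beta * div / (2.0 * dt * (1.0 / dx**2 + 1.0 / dy**2))
                    p[i, j] += dp
                    u[i, j] += dt * dp / dx
                    u[i - 1, j] -= dt * dp / dx
                    v[i, j] += dt * dp / dy
                    v[i, j - 1] -= dt * dp / dy
            if worst < tol:
                break
        return u, v, p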
FORM OF EQUATION:
x-momentum:
∂u/∂t + u ∂u/∂x + v ∂u/∂y = -(1/ρ) ∂p/∂x + ν (∂²u/∂x² + ∂²u/∂y²)

y-momentum:
∂v/∂t + u ∂v/∂x + v ∂v/∂y = -(1/ρ) ∂p/∂y + ν (∂²v/∂x² + ∂²v/∂y²)

Continuity:
∂u/∂x + ∂v/∂y = 0

Here u, v denote the velocity components along the x and y directions, p and ρ the dimensionless pressure and density, and ν = 1/Re, where Re is the Reynolds number. For incompressible flows, the governing equations may be used in either conservative or non-conservative form. For compressible flows, on the other hand, it is desirable to use the conservative form to ensure conservation of mass, momentum and energy across shocks.
Boundary conditions:
The number and type of boundary conditions to be imposed depend on the mathematical nature of the governing equations. The main boundary types are inlet and outlet boundaries and solid boundaries. For the two-dimensional driven cavity considered here, the bottom, right and left sides are walls, and the top acts as the inlet and outlet boundary. The lid of the cavity moves to the right with uniform speed u = 1, parallel to itself; this movement sets the fluid inside in motion.
[Schematic: square cavity with the moving TOP lid; LEFT, RIGHT and BOTTOM are walls]
RESULTS:
Stokes flow in a square cavity with the top lid moving with unit velocity in the horizontal x-direction is considered. The predicted results for the velocity profile graphs are shown.

[Flow field plots for the cases considered]
CONCLUSION:
P.G.GURUSAMYPANDIAN
Assistant Professor
Department of Mechanical
engg,
Kalasalingam University.
Email:
[email protected]
ABSTRACT
Nowadays, smart materials have found an important place in modern engineering applications. Smart material, or intelligent material, systems integrate sensors, actuators and controls with a material or structural component, giving it intelligent, life-like features. The development of smart materials is inspired by biological structural systems and their basic characteristics of functionality, efficiency, precision, self-repair and durability. Smart materials are not only singular materials but also hybrid composites or integrated systems of materials.

Shape memory alloys are one of the major categories of smart materials: after being strained, at a certain temperature they revert back to their original shape, owing to unique properties such as the shape memory effect, pseudoelasticity and high damping capacity. These properties give smart hybrid composites tremendous potential for creating new paradigms for material-structural interactions, with demonstrated successes in engineering applications such as aeronautical engineering, in medical fields such as vascular stents and osteosynthesis, and in commercial fields as well.

The main advantages of shape memory alloys are that they are biocompatible, strong and corrosion resistant. They generally have a high power-to-weight ratio, can withstand a large amount of recoverable strain and, when heated above the transition temperature, can exert high recovery stresses of 700 MPa, which can be used to perform work.

The smart materials covered in this paper are primarily piezoelectric, shape memory alloy, electrostrictive, optical fiber and magnetostrictive materials. The paper also deals with the emerging market for smart materials, India's development in this field and future perspectives.
1. INTRODUCTION
What is a smart material? We could define it as one whose properties or shape may change in response to some stimulus from the environment; what makes a material smart is that changes like this happen by design. Typically, smart materials respond to stimuli that would leave most materials unchanged, such as exposure to a particular chemical reagent or to light, and typically the magnitude of their response is large.

The concept of smart materials may be new, but smart materials themselves go back a long way. Piezoelectrics produce an electrical signal when squeezed, and some natural minerals, such as quartz, are piezoelectric. Smart materials have the potential to change engineering, technology and design principles completely. They do away with mechanical machines as such, and give us a new breed of device for which we don't yet have a proper word. Smart materials are particularly attractive for doing engineering at the nano scale; it is now possible to make machines of this kind with moving parts too small to see with the naked eye.
Materials that have the ability to perform sensing and actuating functions, and are therefore capable of imitating living systems, are called "smart" materials. The "I.Q." of smart materials is measured in terms of their "responsiveness" to environmental stimuli and their "agility": the first criterion requires a large-amplitude change, whereas the second assigns a higher "I.Q." to faster-responding materials.
Today the drive to innovation is stronger than ever. Novel technologies and
applications are spreading in all fields of science. Consequently, expectations and
needs for engineering applications have increased tremendously, and the prospects
of smart technologies to achieve them are very promising.
A. Piezoelectric:
B. Electrostrictive:
C. Magnetostrictive :
3. SMART STRUCTURE
1. Data acquisition (tactile sensing): the purpose of this part is to forward the raw data to the local and/or central command and control units.
2. Command and control: the role of this unit is to manage and control the whole system by analyzing the data, reaching the appropriate conclusions, and determining the actions required.
4. SIGNIFICANCE
Smart materials and systems open up new possibilities, such as clothes that
can interact with a mobile phone or structures that can repair themselves. They also
allow existing technology to be improved. Using a smart material instead of
conventional mechanisms to sense and respond can simplify devices, reducing
weight and the chance of failure. Smart materials research is of long standing but
commercial exploitation has been slow. The Foresight report concluded that “smart
materials technology provides an excellent opportunity for the UK. However, despite
significant progress over the last five years, supported by various government
programmes, it [the UK] remains relatively poorly positioned worldwide".
5. APPLICATIONS
Embedding sensors within structures to monitor stress and damage can reduce
maintenance costs and increase lifespan. This is already used in over forty bridges
worldwide.
Food makes up approximately one fifth of the UK's waste. One third of food grown for consumption in the UK is thrown away, much of it food that has reached its best-before date without being eaten [6, 7]. These dates are conservative estimates, and actual product life may be longer. Manufacturers are now looking for ways to extend product life with packaging, often using smart materials. As food becomes less fresh, chemical reactions take place within the packaging and bacteria build up. Smart labels have been developed that change colour to indicate the presence of an increased level of a chemical or of bacteria. A ripeness sensor for pears is currently being trialled by Tesco. Storage temperature has a much greater effect than time on the degradation of most products. Some companies have developed 'time-temperature indicators' that change colour over time at a speed dependent on temperature, such as the OnVu™ from Ciba Specialty Chemicals and TRACEO® by Cryolog. The French supermarket Monoprix has been using time-temperature indicators for many years, but they are not yet sufficiently accurate or convenient for more widespread introduction.
F. Military applications
Smart skin: in battle, soldiers could wear a T-shirt made of a special tactile material that can detect a variety of signals from the human body, such as detection of hits by bullets.
Smart aircraft: Figure 6 presents a few potential locations for the use of smart materials and structures in aircraft.
6. FUTURE BENEFITS
The potential future benefits of smart materials, structures and systems are amazing in their scope. This technology promises optimum responses to highly complex problem areas, for example by providing early warning of problems or by adapting the response to cope with unforeseen conditions, thus enhancing the survivability of the system and improving its life cycle. Moreover, enhancements to many products could provide better control by minimizing distortion and increasing precision. Another possible benefit is enhanced preventative maintenance of systems and thus better performance of their functions. By its nature, the technology of smart materials and structures is a highly interdisciplinary field, encompassing the basic sciences (physics, chemistry, mechanics, computing and electronics) as well as the applied sciences and engineering (such as aeronautics and mechanical engineering). This may explain the slow
8. ENVIRONMENTAL RISKS
Smart materials and systems are hugely varied and are applied in a wide range of fields. It is hard to make generalisations about their environmental impact, as this depends on the specific materials and applications. However, recyclability is not an issue that most researchers are addressing: they believe that smart materials are either too early in their development or used in such small quantities that this is not yet an issue.
9. CONCLUSION
Today, the most promising technologies for lifetime efficiency and improved reliability include the use of smart materials and structures. Understanding and controlling the composition and microstructure of any new material are the ultimate objectives of research in this field, and are crucial to the production of good smart materials. The insights gained by gathering data on the behaviour of a material's inner crystal structure as it heats and cools, deforms and changes will speed the development of new materials for use in different applications. Structural ceramics, superconducting wires and nanostructured materials are good examples of the complex materials that will fashion nanotechnology. New or advanced materials that reduce weight, eliminate sound, reflect more light, dampen vibration and handle more heat will lead to smart structures and systems, which will definitively enhance our quality of life.
Rajan N.
Mobile: 09360373102
ABSTRACT
This paper attempts to reduce the weight of a complete truss structure using an optimization technique. Linear programming is used to formulate the objective function and the constraints. The objective is to reduce the weight of the total structure, so that the cost of the structure is reduced. The forces acting in the members, the allowable stresses, the buckling loads and the deflections of the members are taken as the constraints. The problem dealt with is that of minimizing the weight of the truss structure. The most important contribution of this model is that it can be operated by any individual with little knowledge of truss structures. This is achieved by running the optimization model on a user-friendly personal computer and by using solver tools to analyze the problem.
1.1. Introduction:
Truss structures are used in many civil engineering applications such as bridges, buildings and roofs. In today's complex environment, design engineers are faced with thousands of daily decisions and must rely on a myriad of processes and conflicting data to meet the industry's needs. These are major decisions; if any one datum is incorrect, the entire decision can lead to significant consequences.
Although both these problems attempt to achieve the same objective, the search
space and the optimization algorithm required to solve each problem are different,
hence, we discuss the latter problem.
n = 2j – 3
where,
n – Number of members
j – Number of joints
A long column fails by buckling (or bending); the load at which this occurs is known as the buckling, crippling or critical load. If the force in a member is compressive, the buckling condition is applied to that member. The condition for the buckling load for the given end conditions is

P = π²EI / L²
Deflection:
A member subjected to compressive load will undergo deflection. The deflection of the truss is calculated using the virtual load method, δ = Σ PuL/(AE), where P is the member force due to the applied loads, u the member force due to a unit virtual load, L the member length, A its cross-sectional area and E the modulus of elasticity.
2.1. Optimization
Merriam-Webster defines optimization as “the mathematical procedures involved in
the act, process or methodology of making something as fully perfect or effective as
possible”.
This definition makes clear the appeal of optimization to an engineer. Using scientific
or mathematical procedures to arrive at the perfect or most effective decision offers
the possibility of dramatically improving performance. In practice, optimization has
come to mean packaged software applications that postulate a model for optimal
layout of truss structure, estimate various parameters that govern the behavior of the
truss structure model for each specific instance and then apply a mathematical
technique to determine the best cross sectional value in order to reduce the cost of
the material. Techniques can include linear and nonlinear programming, dynamic programming, etc.
Then there are other decision problems where mathematical optimization may be
valuable, but where there are multiple objectives, or constraints and trade-offs that
are difficult to make explicit or quantify. For these decision problems, the
optimization approach may be employed, but it is important to understand its
limitations and couple it with other decision – support methods.
2.3. Linear programming:
a. Decision variable
b. Objective function
c. Constraints
Decision variables: They are the physical quantities that an operations manager can
control. The optimal values of these variables will be determined after solving the
problem through a constrained optimization problem.
Constraints: The practical limitations that restrict the choice of the decision variables of a problem are stated as constraints. These constraints can be mathematically represented by less than (<), greater than (>), less than or equal to (<=), equal to (=), or greater than or equal to (>=) relations.
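As a toy illustration of these components (decision variables, objective and constraints; the numbers are invented, not from the truss problem), a small linear program can be stated and solved in a few lines:

    from scipy.optimize import linprog

    # maximise 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0
    res = linprog(c=[-3, -2],                 # linprog minimises, so negate
                  A_ub=[[1, 1], [1, 3]], b_ub=[4, 6],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)                    # optimal variables and profit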
Solvers, or optimizers, are software tools that help users find the best way to allocate
scarce resources. The resources may be raw materials, machine time or people
time, money, or anything else in limited supply. The "best" or optimal solution may
mean maximizing profits, minimizing costs, or achieving the best possible quality.
An almost infinite variety of problems can be tackled this way, but here are some
typical examples:
The solver will find values for the decision variables that satisfy the constraints while
optimizing (maximizing or minimizing) the objective.
Linear programming problems -- where all of the relationships are linear, and hence
convex -- can be solved up to hundreds of thousands of variables and constraints,
given enough memory and time. Models with tens of thousands of variables and
constraints can be solved in minutes (sometimes in seconds) on modern PCs.
4. Use the dialogs in Excel, or function calls in the program, to tell the Solver
about the decision variables, objective and constraint calculations, and
desired bounds on constraints and variables.
5. Click Solve in Excel, or call optimize () in the program, to find the optimal
solution.
Within this overall structure there is a great deal of flexibility, either in a spreadsheet or in a custom program, in how to choose the cells or variables that hold the model's decision variables and constraints, and which formulas and built-in functions to use. Since decision variables and constraints usually come in groups, you will want to use cell ranges in your spreadsheet, or arrays in your program, to represent them.
4. Analysis:
Consider the five-bar truss structure, which carries a load of 1 kN as shown in the figure.

[Five-bar truss schematic: span 7.5 m, height 5 m, applied load 1 kN]
Objective Function:
Once the connectivity of the truss is given, the cross-sectional area and the material
properties of the members are the design parameters. Let us choose the cross-
sectional area of the members as the design variables. There are five design
variables, each specifying the cross-section of a member (A1 to A5). This completes
the first task of the optimization.
Constraints:
1. The material strength for all elements is Syt = Syc = 500 MPa, and
2. The modulus of elasticity E = 200 GPa.
The forces in the members are found using the method of joints:

Member 1: 0.666 kN (compressive)
Member 2: 0.5767 kN (tensile)
Member 3: 1.1547 kN (tensile)
Member 4: 1.334 kN (compressive)
Member 5: 1.155 kN (tensile)
The stress in each member must not exceed the allowable value:

0.666/A1 < Syc,  0.5767/A2 < Syt,  1.1547/A3 < Syt,  1.334/A4 < Syc,  1.155/A5 < Syt
The other set of constraints arises from the stability consideration of the compression members. Realizing that each of these members is connected by pin joints, we can write the Euler buckling conditions for the axial loads as

πEA1²/18.75 ≥ 0.666,   πEA2²/6.25 ≥ 1.1547

The maximum allowable deflection is δmax = 2 mm.
By using Castigliano's theorem and the virtual load method, the deflection constraint is obtained as follows,
Subject to:

Syc - 0.666/A1 ≥ 0
Syt - 0.5767/A2 ≥ 0
Syt - 1.1547/A3 ≥ 0
Syc - 1.334/A4 ≥ 0
Syt - 1.155/A5 ≥ 0
πEA1²/18.75 - 0.666 ≥ 0
πEA2²/6.25 - 1.1547 ≥ 0
This completes the formulation of the truss structure problem. The above NLP is solved using the Solver tool in an MS Excel sheet.
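The same NLP can equally be solved outside Excel. The hedged sketch below uses SciPy and includes only the stress constraints; the member lengths and the steel density are assumed placeholders, not data from this paper.

    import numpy as np
    from scipy.optimize import minimize

    E = 200e3                                  # MPa
    S_allow = 500.0                            # MPa (Syt = Syc)
    forces = [0.666, 0.5767, 1.1547, 1.334, 1.155]       # kN, members 1..5
    lengths = np.array([5.0, 7.5, 5.0, 7.5, 5.0]) * 1e3  # mm (assumed)
    rho = 7.85e-6                              # kg/mm^3, steel (assumed)

    def weight(A):                             # objective: total member weight
        return rho * np.dot(lengths, A)

    cons = [{"type": "ineq",                   # S_allow - F/A >= 0
             "fun": lambda A, k=k, F=F: S_allow - 1e3 * F / A[k]}
            for k, F in enumerate(forces)]

    res = minimize(weight, x0=np.full(5, 100.0), constraints=cons,
                   bounds=[(1.0, None)] * 5, method="SLSQP")
    print(res.x)                               # optimal areas A1..A5 in mm^2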
5. Conclusion:
Senthil Kumar .N *
[email protected], 9994322766
ABSTRACT
The main objective of this work is to design and develop a new ergonomic grass-cutting tool for agricultural fields. In agricultural land, grass cutting is essential because grass affects the growth of the crops. Existing grass-cutting tools are not user friendly for farm workers, because the workers have to bend their backs to cut the grass, which reduces their efficiency. To increase efficiency and reduce the time taken to cut grass, a new concept tool has been taken up in this work. The new tool reduces cutting time and increases efficiency, and workers need not bend their backs during grass cutting. The new tool has sharp edges at the bottom and a long handle, so grass cutting is more viable.

The tool has the shape of an 'L' with the edge being curved. A handle is placed on the top for a complete grip of the tool. The handle is made of a fiber material, so that the stress on the human hand is much reduced. The material required is low compared with other standard models, and the fabrication process is simple. The tool is formed by a bending operation: a steel rod is formed to the required shape using a 'V' bend in a mechanical or hydraulic press. The tip of the tool is sharpened at the two side edges as well as at the tip. The cost of the tool is low compared to other models. The application of the tool for agricultural purposes is easy compared with other tools: because the worker does not bend while using the tool, worker productivity increases.

Since all these edges are sharpened and backed up with enough load, the grass-cutting output is increased, and the tool can be handled easily by anyone without difficulty.
KEYWORDS: ANTHROPOMETRY, DESIGN, CONCEPT GENERATION, AND
WORKERS PRODUCTIVITY
ABSTRACT
The effect of windbreak walls on the thermal performance of natural draft wet cooling towers (NDWCT) under crosswind has been investigated numerically. The three-dimensional CFD model utilizes the standard k-ε turbulence model as the turbulence closure to quantify the effects of the locations and porosities of the walls on NDWCT thermal performance. Moreover, the improvement in NDWCT thermal performance due to windbreak walls has been examined at different crosswind directions. Results from the current investigation demonstrate that installing solid impermeable walls in the rain zone degrades the performance of the NDWCT, whereas installing solid walls at the inlet of the NDWCT improves the natural draught cooling tower performance at all of the investigated crosswind velocities. Similarly, installing walls with low porosity improves the performance of the NDWCT. A reduction of 0.5-1 K in the temperature of the cooling water returned from the tower to the condenser has been achieved at all of the investigated crosswind velocities by installing porous walls both inside and outside the rain zone.
1. INTRODUCTION
A natural draft wet cooling tower (NDWCT) is the cornerstone of the cooling system in large modern thermal power plants. In an NDWCT, a combination of heat and mass transfer effects is used to cool the water coming from the turbine's condenser. The hot water, coming from the condenser, is sprayed on top of splash bars or film fills in order to expose a very large portion of the water surface to the cooling ambient air. The moisture content of the cooling air is less than the moisture content of saturated air at the hot water temperature, which results in the evaporation of an amount of water. The energy required for evaporation is extracted from the remaining water, hence reducing its temperature. The cooled water is then collected at the basin of the NDWCT and pumped back into the condenser, completing its circuit.
2. EXPERIMENTAL APPROACH
GOVERNING EQUATIONS
In FLUENT, the air flow is solved as a continuous phase using the Eulerian approach, while the droplet trajectories are solved as a dispersed phase using the Lagrangian approach. The air flow equations that describe heat, mass and momentum transfer can be written as a general transport equation of the form

∇·(ρ_ma u φ - Γ_φ ∇φ) = S_φ + S_pφ

where ρ_ma is the moist air density, u is the velocity vector, φ is the scalar quantity standing for u, v, w, T, Y_v, k and ε, Γ_φ is the diffusion coefficient, S_φ is the source term for the air phase and S_pφ is the additional source due to the interaction between the air and the water droplets. In the Lagrangian reference frame, the equation of motion relates the water droplet velocity to its trajectory.
BOUNDARY CONDITIONS
FILL ZONE:
The main characteristics of any film fill are its heat and mass transfer, together with the pressure drop within it. The heat and mass transfer are represented via heat and mass transfer coefficients, while the pressure drop is represented via a pressure loss coefficient. Because of limitations in the current CFD code, the water flow in the fill zone has been approximated by droplet flow instead of film flow.
PRESSURE LOSSES
As the air flows through the NDWCT, it suffers pressure losses that can be expressed in terms of a pressure loss coefficient, the air density and the perpendicular velocity component across the surface, as defined in Eq. (10) (a loss of the form Δp = K_L ρ v²/2). The main pressure losses throughout the NDWCT are caused by the shell supports, fill, water distribution pipes and drift eliminators. Pressure losses due to the drag force from water droplets in both the rain and spray zones are calculated internally by FLUENT.
WINDBREAK WALLS
Windbreak walls have been used for centuries to reduce wind speed,
to control heat and moisture transfer and to improve climate and environment.
However, only within the last few decades have systematic studies considered the
aerodynamics and shelter mechanisms of shelterbelt windbreak walls. The primary
effect of any windbreak wall is to reduce the wind speed. Throughout the current
paper, different windbreak walls have been examined.
Windbreak walls have been installed both inside and outside the
NDWCT. The dimensions and geometry of both the windbreak walls and the
NDWCT are listed in Table 1. In the following sections, the effects of wall location,
porosity and wind direction on the thermal performance of the NDWCT, represented
by the change in water temperature due to crosswind (ΔTwo), are investigated.
TABLE 1
CASE    Inside            Outside
        α       KL        α       KL
CD_3    0.00    ∞         0.00    ∞
CD_5    0.53    11.0      0.6     5.6
CD_6    0.53    11.0      0.7     2.2
CONCLUSION:
M.NARASIMHARAJAN
ABSTRACT
To validate Finite Element models, test data, e.g. from an experimental modal
analysis, may be utilized. An important requirement in dynamic analysis is to
establish an analytical model capable of reproducing the experimental results. For
this purpose, experimental modal analysis and finite element models that describe
the behaviours of the structure in terms of frequencies and mode shapes were
compared. Many model updating methods [4] have been developed, but model
updating by artificial neural networks has emerged only in recent decades. One
unique feature of neural networks is that they must be trained to approximate
functions. In developing an iterative neural network methodology, it has been shown
that the number of training samples required grows rapidly as the number of
parameters to be updated increases, and training the network on so many samples
becomes a time-consuming task. To reduce the number of training samples and
to obtain a well-trained neural model, the orthogonal array method is developed [1, 5].
In this paper, we investigate the use of orthogonal arrays for the sample selection.
The results indicate that the orthogonal arrays method can significantly reduce the
number of training samples without affecting too much the accuracy of the neural
network prediction.
1. INTRODUCTION:
Model updating is done by modifying the mass, stiffness, and damping parameters of
the FE model until an improved agreement between FEA data and test data is
achieved. Unlike direct methods, producing a mathematical model capable of
reproducing a given state, the goal of FE model updating is to achieve an improved
match between model and test data by making physically meaningful changes to
model parameters which correct inaccurate modeling assumptions. Theoretically, an
updated FE model can be used to model other loadings, boundary conditions, or
configurations without any additional experimental testing. Such models can be used
to predict operational displacements and stresses due to simulated loads.
In choosing updating parameters, the following are widely used; they are selected
either from the sensitivities of the structural parameters or from parameters
assumed in advance by the analyst:
(a) Material Properties – Young’s modulus (isotropic or orthotropic), Poisson’s
ratio, shear modulus and mass density.
(b) Geometrical Element Properties – Spring stiffness, plate thickness and beam
cross-sectional properties.
(c) Lumped Properties – Lumped stiffness (boundary conditions) and lumped
masses.
(d) Damping Properties – Modal damping, Rayleigh damping coefficients, viscous
and structural damper values.
2 NEURAL NETWORKS
Fortunately, Artificial Neural Networks (ANN) offer solutions to problems that are
very difficult to solve using traditional algorithmic decomposition techniques. The
potential benefits of neural nets are:
Learning from the interaction with the environment
Few restrictions on the functional relationships
An inherent ability to generalize training information to similar situations
Inherently, they ensure parallel design and load distribution.
Neural networks have the ability to derive relations from complicated or imprecise
data and can detect trends that are too complex for humans to recognize with any
other computing technique. A neural network uses a training rule in which the weights
and biases of the connections are adjusted based on the outcome. A trained NN is an
expert on the information it analyses, and one of its inherent strengths is the ability to
forecast or predict an outcome.
The MATLAB neural network toolbox [4] was used to perform the network analysis.
The toolbox contains the functions necessary for generating the network algorithm:
network generation, network training, pre-processing of data into the NN, and
post-processing of data coming out of the network.
This process as shown in figure (2) begins by feeding the measured dynamic
characteristics Xm into an NN model which is trained beforehand. The outputs of the
NN model are the identified structural parameters Yi. These identified structural
parameters are then fed into the finite element (FE) model to produce a set of
calculated dynamic characteristics Xc. A comparison between the calculated
dynamic characteristics Xc and the measured dynamic characteristics Xm is made. If
these two sets of parameters differ significantly, then the NN model will be retrained
on-line using adjusted training samples that contain Xc and Yi. The
retrained NN model is then used to identify the structural parameters again by
feeding in the measured dynamic characteristics Xm. This identification and on-line
retraining procedure is repeated until the difference between Xc and Xm becomes
insignificantly small or until Yi converges. At the end of the iteration the final
identified parameters are guaranteed to produce the dynamic characteristics that are
very close to the measured ones. When compared to the original design, these
structural parameters can be used to infer the location and the extent of damage in
the structure.
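The identification and on-line retraining procedure described above can be summarized as a simple loop. The Python sketch below is a minimal rendering of it, in which nn, fe_solve and train are hypothetical stand-ins for the trained network, the FE analysis and the retraining step; none of them are names from the paper or from the MATLAB toolbox.

import numpy as np

def model_update(x_measured, nn, fe_solve, train, tol=1e-3, max_iter=50):
    """Iterative NN-based model updating loop (sketch).

    nn:       maps measured dynamic characteristics Xm -> parameters Yi
    fe_solve: FE model mapping parameters Yi -> characteristics Xc
    train:    retrains the network on the extra sample (Xc, Yi)
    """
    y_identified = nn(x_measured)
    for _ in range(max_iter):
        x_calculated = fe_solve(y_identified)        # calculated Xc
        if np.linalg.norm(x_calculated - x_measured) < tol:
            break                                    # Xc close to Xm: converged
        nn = train(nn, x_calculated, y_identified)   # on-line retraining
        y_identified = nn(x_measured)                # identify again
    return y_identified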
3. ORTHOGONAL ARRAYS
Orthogonal arrays (often referred to as Taguchi methods) are often employed in
industrial experiments to study the effect of several control factors.
Table 1
Orthogonal array OA(4, 3, 2, 2)
Test    A    B    C    Response (result)
1       0    0    0    R000
2       0    1    1    R011
3       1    0    1    R101
4       1    1    0    R110
As an example, Table 1 shows the orthogonal array OA(4, 3, 2, 2) that outlines four
experiment runs for three 2-level factors (A, B, and C) with strength 2. The response
or the results of the experiments are also attached in the last column of the table.
The levels of factors are indicated by 0 (for low level) and 1 (for high level). This OA
has four rows and three columns (excluding the response column). Each row
represents a test setup with specified factor levels. It can be seen that each column
(factor) contains two level-0 and two level-1 conditions. Note that any two columns in
this OA contain each of the level combinations (0,0), (0,1), (1,0) and (1,1) exactly
once. Thus, the three columns in this OA are orthogonal to each other. This orthogonality provides a
fully balanced experimental arrangement which is comprehensive in terms of test
results and efficient in terms of the number of tests required.
For instance, after performing these four experiments, the response for the low level
of factor A, RA0, and the response for the high level of factor C, RC1, can be found,
respectively, as
RA0 = (R000 + R011)/2, RC1 = (R011 + R101)/2.
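These level averages follow directly from the array; the sketch below reproduces them in Python for the OA(4, 3, 2, 2) of Table 1, with placeholder numbers standing in for the responses R000–R110.

import numpy as np

# Orthogonal array OA(4, 3, 2, 2) of Table 1: columns are factors A, B, C.
oa = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
responses = np.array([4.0, 6.0, 3.0, 5.0])  # placeholders for R000, R011, R101, R110

def level_average(factor, level):
    """Average response over the runs where `factor` is held at `level`."""
    return responses[oa[:, factor] == level].mean()

print(level_average(0, 0))  # RA0 = (R000 + R011)/2
print(level_average(2, 1))  # RC1 = (R011 + R101)/2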
3.1 EXAMPLE
Example 1:
Consider the process of mixing concrete; we have a choice of different mixtures of
sand, cement and water and we do not know which to choose. We decide to try two
different levels of each, as listed below:
C1 = 1 kg of cement
C2 = 1.5 kg of cement
S1 = 500 g of sand
S2 = 750 g of sand
W1 = 1 litre of water
W2 = 2 litres of water
We can try every combination of sand, cement and water and test each different
combination to see which is the hardest. If we do this, there will be a total of eight
combinations.
Taguchi experiments reduce the number of experiments required to find the best
levels for each factor. The method works by calculating the statistical properties of
orthogonal arrays.
We can draw up a table for the cement mixing example, with 3 factors (cement, sand
and water) and 2 levels (for each) in an orthogonal array.
Table 2:
Trial    C    S    W
Y1       1    1    1
Y2       1    2    2
Y3       2    1    2
Y4       2    2    1
Where 1 and 2 are the levels of each factor. For example, in trial 1, we make our
mixture with all the ingredients at level 1.
A full set of experiments for this process would require eight different combinations
(= 2³), as opposed to the four needed for the Taguchi version of the experiment.
The saving involved in using the Taguchi method becomes more significant as the
number of levels or factors increases [1].
To analyze the results, we must have a way of finding which experiment produced
the best answer. In our example, we would have to measure the hardness of the
cement. Assume that a lower result indicates harder cement. (In Neural Network
terms, we would find the error associated with each experiment. The lower the error
is, the better the result.)
So having undertaken the experiments and obtained the results, we can now
calculate the best levels to use with each factor. Let us assume, for example, that
the results obtained are as shown below:
Table 3:
Trial    Result
Y1       11
Y2       20
Y3       5
Y4       7
We can find the effect of each level in each factor by averaging the results which
contain that level and that factor.
C1 = (Y1 + Y2) / 2 = (11+20) / 2 = 15.5
C2 = (Y3 + Y4) / 2 = (5+7) / 2 = 6
S1 = (Y1 + Y3) / 2 = (11+5) / 2 = 8
S2 = (Y2 + Y4) / 2 = (20+7) / 2 = 13.5
W1 = (Y1 + Y4) / 2 = (11+7) / 2 = 9
W2 = (Y2 + Y3) / 2 = (20+5) / 2 = 12.5
The best combination is therefore C2, S1, W1: these are the levels that produce the
lowest results and hence the hardest mixture.
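The same averaging can be written compactly in code; this sketch reproduces the arithmetic above from the design of Table 2 and the results of Table 3, and picks the level with the lowest (hardest) mean for each factor.

import numpy as np

# Table 2 design: levels 1 and 2 for cement (C), sand (S) and water (W).
design = np.array([[1, 1, 1],
                   [1, 2, 2],
                   [2, 1, 2],
                   [2, 2, 1]])
results = np.array([11.0, 20.0, 5.0, 7.0])  # Table 3 results (lower = harder)

best = {}
for j, name in enumerate(("C", "S", "W")):
    means = {lvl: results[design[:, j] == lvl].mean() for lvl in (1, 2)}
    best[name] = min(means, key=means.get)  # level giving the lowest mean

print(best)  # {'C': 2, 'S': 1, 'W': 1}, i.e. the combination C2, S1, W1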
Example 2:
(Refer: Taguchi Techniques for Quality Engineering by Phillip J. Ross, Page No:
279)
Trial no.    1    2    3    4
1            1    1    1    1
2            1    2    2    2
3            1    3    3    3
4            2    1    2    3
5            2    2    3    1
6            2    3    1    2
7            3    1    3    2
8            3    2    1    3
9            3    3    2    1
Fig. 3: Mean response at levels 1–3 for each factor (Wt. Distribution, Stabilizer, Nose Length, Wing Angle).
Fig. 4: Variance at levels 1–3 for each factor (Wt. Distribution, Stabilizer, Nose Length, Wing Angle).
4.0 CONCLUSION
In developing an iterative neural network technique for model updating of structures,
this paper has shown that the number of training samples required increases
exponentially as the number of parameters to be updated increases. It is noted that
the selection of training samples for NN models resembles the design of experiments
involving several factors varied over several levels. Orthogonal arrays have been
developed and adopted by experimentalists for laying out a minimal number of tests
while retaining all the necessary information. In this study, we investigated the use of
orthogonal arrays for the sample selection for training NN models.
It is concluded that the orthogonal arrays method can significantly reduce the
number of training samples without affecting too much the accuracy of the neural
network prediction.
ACKNOWLEDGMENT
REFERENCES
Journals/Periodicals:
5. C.C. Chang, T.Y.P. Chang and Y.G. Xu, 'Adaptive neural networks for model
updating of structures', Hong Kong University of Science and Technology,
Smart Mater. Struct. 9 (2000) 59-68.
N. Vasiraja (a), K. Alaguraja (b)
(a) Lecturer, Mechanical Engineering Department, Mepco Schlenk Engineering
College, Sivakasi.
(b) II M.E., Industrial Safety Engineering, Mepco Schlenk Engineering College,
Sivakasi.
ABSTRACT
This paper describes the cement dust exposure, its health
effects and control of cement dust in cement industry (packing plant section).
Exposure of cement dust has long been associated with the prevalence of
respiratory symptoms and varying degrees of airway obstruction in man. Apart from
respiratory diseases, it has also been found to cause lung problems,
gastrointestinal tumours and dermatitis. An optimum portable fabric bag filter has
therefore been designed and fabricated to collect the cement dust. In these filters,
the stream of gas and dust passes through the pores of the filter fabric, and the dust
is captured by remaining on the bag. As dust builds up on the bag, the filter is
shaken so that the collected dust falls to the exit hopper. To obtain better operation,
the design also provides good conditions for easy maintenance, and the pressure in
various parts of the filter system is controlled. By installing a shaking system, the
shaking periods of the bags were increased in order to build up the dust cake layer
and improve the dedusting performance and bag life. The fabric filter bags are made
from cotton cloth, and the filter captures fine micron-size particles with considerable
efficiency.
Key words: Bag filter, cement dust, shaking system, dust collection.
1. INTRODUCTION:
Cement is widely used in construction. The cement is
manufactured by the combination of calcium, silicon, iron and aluminium compounds
in the form of limestone and clay. In a cement production plant, dust is produced by
corrosion, grinding, discharge, replacement, the baking of materials in the furnace,
their movement inside the furnace, and so on. Dust originates in different sections of
the cement production process, such as raw material preparation, raw material
grinding, the clinker cooler, final milling, and the packing and loading sections. The
acute respiratory symptoms caused by cement dust are cough, shortness of breath,
wheezing, stuffy nose, runny nose and sneezing; the chronic respiratory symptoms
are chronic cough, chronic sputum production, dyspnoea and chronic bronchitis. In
the cement packing plant, the cement bags are loaded into a rotary packing machine
manually, so the workers are exposed to the cement dust. This risk has been
controlled by designing a safe work environment with a fabric bag filter. The bag
filter removes the cement dust from the packing plant atmosphere: the tubular bags
filter the cement dust from the air, and the collected dust is returned to the rotary
cement packing machine.
Irritant dermatitis is caused by the physical properties of cement that irritate the
skin mechanically. The fine particles of cement, often mixed with sand or other
aggregates to make mortar or concrete, can abrade the skin and cause irritation
resulting in dermatitis. With treatment, irritant dermatitis will usually clear up. But if
exposure continues over a longer period the condition will get worse and the
individual is then more susceptible to allergic dermatitis.
Fig 1.
3. MATERIALS AND METHODS:
FABRIC BAG FILTER
Bag filters, commonly known as baghouses or fabric collectors,
use filtration to separate dust particulates from dusty gases. They are among the
most efficient and cost-effective types of dust collector available and can achieve a
collection efficiency of more than 99% for very fine particulates. In designing a fabric
bag filter, one important factor is the filtration rate, i.e., the velocity at which the
polluted air is guided through the filter fabric; it depends on five parameters: the type
of dust, its application, the temperature, the dust particle size and the density.
Dust-laden gases enter the baghouse and pass through fabric bags that act as
filters. The high efficiency of these collectors is due to the dust cake formed on the
surfaces of the bags.
The fabric primarily provides a surface on which dust particulates
collect through the following four mechanisms,
1. Inertial collection - Dust particles strike the fibers placed perpendicular to the gas-
flow direction instead of changing direction with the gas stream.
2. Interception - Particles that do not cross the fluid streamlines come in contact with
fibers because of the fiber size.
3. Brownian movement – Submicrometre particles are diffused, increasing the
probability of contact between the particles and collecting surfaces.
4. Electrostatic forces - The presence of an electrostatic charge on the particles and
the filter can increase dust capture.
Fig 2.
4. RESULT:
The top ends of the fabric filter bags are mounted on springs, and
the shaker mechanism reciprocates vertically, which makes it more effective. The
bag filter contains 49 bags of about 5 cm diameter and 100 cm length, and the
dust-laden gas passes in through the bottom of the bags.
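From the stated bag count and dimensions, the total cloth area, and hence the air-to-cloth ratio (filtration velocity), can be estimated. In the sketch below the gas flow rate is an assumed illustrative value, not one reported here.

import math

# Cloth area of 49 cylindrical bags, 5 cm diameter and 100 cm length.
n_bags, d, L = 49, 0.05, 1.0
area = n_bags * math.pi * d * L   # about 7.7 m^2 of filter cloth

Q = 0.15                          # gas flow rate, m^3/s (assumed)
print(area, Q / area)             # filtration velocity, m/s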
For continuous improvement of fabric filters of plant, the following recommendation
should be followed;
1. Executing a continuous plan of repairs and preventive maintenance.
2. Periodic measurement of static pressure at determined points and control of
pressure faults within each limit.
3. Periodic measurement of the dust output from the fabric filters.
4. Periodic review of the fan positions to provide suitable pressure for drawing the
polluted air into the filter.
5. CONCLUSION:
The equipment was found to be effective in reducing respirable
dust in the cement packing plant work environment, thereby reducing the risk of
dust inhalation by workers. This equipment should reduce occupational diseases in
future from their present level.
6. Acknowledgement
Submitted by
M.P.VENKATESH J.KARTHIKEYAN
[email protected] [email protected]
9003656082 9894082785
ABSTRACT
Driving in the mountains can be a wonderfully exhilarating experience, but it can also
be tiring and cause extra wear and tear on your vehicle, which leads to many
accidents due to loss of control. This paper deals with a technology that is helpful for
vehicles moving in mountain areas.
This system is based on a ratchet mechanism. The ratchet is fixed firmly to the
rear wheel or its rotating axle. A plunger is fixed to a stationary part of the vehicle
so that it makes contact with the ratchet. If the vehicle stops on a steep slope and
starts moving backwards, the plunger locks the ratchet and stops the vehicle
immediately.
1. Ratchet
Construction:
The ratchet wheel is attached to the rear wheel or the rotating axle of the vehicle.
The rear wheel is chosen because, while moving uphill, the weight of the vehicle
acts towards the rear wheel, giving it good contact with the road. A low-tension,
spring-actuated plunger with an electromagnet is fixed to a stationary part of the
vehicle near the wheel. The electromagnet is connected to a switch and battery.
A speed sensor is placed at the wheel, and its circuit is connected to the
electromagnet.
Ratchet mechanism (figure: plunger, electromagnet, spring, lever and cable arrangement).
When the switch is on, the electromagnet is de-energized and the plunger is
released by the spring force, making contact with the ratchet fixed firmly to the
rear wheel or rotating axle. The ratchet is designed so that it allows forward
motion only.
During forward motion, the plunger rides up over the ratchet teeth. When reverse
motion takes place, a tooth of the ratchet strikes the plunger, which prevents the
ratchet from moving in the reverse direction and in turn arrests the wheel motion.
When the vehicle attains a certain speed, the speed sensor at the wheel signals
the circuit and allows current to flow to the electromagnet, which becomes
magnetized and pulls the plunger back. This avoids the noise that would otherwise
be produced by contact between the ratchet and plunger when the vehicle runs
at speed.
A cable arrangement can also be used to hold the plunger away from the ratchet
whenever the system is not in use.
This system is useful for both two-wheelers and four-wheelers.
In two-wheelers, the ratchet is attached firmly to the rear wheel, and the
spring-actuated plunger operated by the electromagnet is attached to a stationary
part near the wheel so that it can make contact with the ratchet.
In four-wheelers, the ratchet is attached to the rear-wheel rotating axle, and the
plunger arrangement is similarly attached to a stationary part.
For heavy-load vehicles, multiple plungers can be used to withstand the loads.
Advantages:
1. Simple in design.
2. Suitable for both two- and four-wheelers; in two-wheelers especially, riders can
balance the vehicle easily on steep slopes.
3. Wear and tear of the engine is reduced: if the vehicle rolls backwards on a slope,
the engine must exert extra torque on the wheels to overcome it, whereas this
system simply locks the vehicle in position.
4. There is no need to hold the brake on slopes, which reduces brake shoe wear
and increases brake life.
(Figure: circuit arrangement of the electromagnet or electronic lever with the ratchet on the wheel or rotating axle.)
Construction:
First, gear 1 is attached firmly to the front wheel or the front-wheel rotating axle.
Another gear, gear 2, is attached to one end of the crankshaft as shown below. The
piston is connected to the crankshaft by a connecting rod.
(Figure: piston and cylinder, connecting rod, crankshaft, and gear 2 meshing with gear 1 on the wheel.)
Working:
When the wheel rotates, gear 1, attached to the front wheel or its rotating axle, also
rotates. The front wheel is chosen because, during downhill travel, the weight of the
vehicle acts towards the front wheel, giving it good contact with the road. With the
help of a lever assembly, gear 2 is engaged with gear 1, which in turn rotates the
crankshaft and moves the piston up and down. During the downward stroke of the
piston, the inlet valve opens and atmospheric air is drawn into the cylinder. When
the piston moves upwards, the inlet valve closes and the air inside the cylinder is
compressed; once the pressure reaches its maximum, a pressure relief valve fitted
at the outlet opens and releases the air to the atmosphere. The compression resists
the piston motion, which in turn resists the motion of gear 1 attached to the wheel or
rotating axle, so the speed of the wheel is reduced.
During suction the piston moves freely, whereas during compression it moves
slowly. This creates an uneven reduction in speed: the wheel rotates faster during
suction and more slowly during compression. To avoid this, two pistons and
cylinders are attached to the same crankshaft, so that one piston performs suction
while the other performs compression and the speed of the wheel is reduced
uniformly.
Lever assembly to engage and disengage gears
Advantages:
1. During downhill travel, there is no need to keep the engine running, so engine
wear and tear is avoided.
2. The vehicle can be ridden in neutral and is easy to handle.
4. Speed is reduced drastically without the help of the gearbox or engine.
Of course, the most important technique for mountain driving is to stay relaxed and
enjoy it. At the same time, life is precious, and we must protect it.
Always remember
IF SAFETY IS NOT PRACTICED,
IT WON’T BE USED
M.GIRIDHARAN R.VENNANGKODI
[email protected] [email protected]
9894082785 9791336410
ABSTRACT:
FUTURE DEVELOPMENT:
So far we have implemented our project using a relay switch and a DC motor. In
future, for more precise operation, the engagement and disengagement should be
controlled by a microcontroller.
Using the 89C51 microcontroller, a relay circuit could handle first and reverse gear
operation, and a speed control circuit could be used for the other gears. A stepper
motor could serve as the actuator, receiving its signal from the microcontroller and
in turn actuating the master cylinder and slave cylinder.
INTRODUCTION:
Modern cars have every possible feature to make driving easy. Power-assisted
control mechanisms are generally used in cars at the cost of fuel efficiency.
Electronically controlled devices are standard in a car like the Maruti for its steering;
these enhance comfort without affecting fuel performance. In this spirit, this paper
proposes a system devised to retrofit to any car.
A handy switch provided on the gear shift lever operates a motor that controls the
hydraulic cylinders by means of a relay and two sensor switches. The fluid pressure
produced in the hydraulic cylinders forces the piston against the spring force,
directing the clutch to engage or disengage, for which the fluid line is short-circuited.
This mechanism is trouble-free, cost-effective and simple.
INTRODUCTION OF CLUTCH:
The power developed inside the engine cylinder is ultimately aimed at turning the
wheels so that the motor vehicle can move on the road. The reciprocating motion of
the piston turns the crankshaft, rotating the flywheel, through the connecting rod.
The circular motion of the crankshaft is then transmitted to the rear wheels through
the clutch, gearbox, universal joints, propeller shaft or drive shaft, differential and
axles extending to the wheels. The application of engine power to the driving wheels
through all these parts is called power transmission. The power transmission system
is usually the same on all modern passenger cars and trucks, but its arrangement
may vary according to the method of drive and the type of transmission units.
The figure shows the power transmission system of an automobile. The motion of
the crankshaft is transmitted through the clutch to the gearbox or transmission,
which consists of a set of gears to change the speed. From the gearbox, the motion
is transmitted to the propeller shaft through a universal joint, and then to the
differential through another universal joint. A universal joint is used where two
rotating shafts are connected at an angle for power transmission. Finally, the power
is transmitted to the rear wheels, the differential allowing them to rotate at different
speeds while the vehicle is taking a turn. Thus, the power developed inside the
cylinder is transmitted to the rear wheels through a system of transmission.
Vehicles which have front-wheel drive in addition to the rear-wheel drive include a
second set of propeller shafts, universal joints, final drives and differentials for the
front units.
DISENGAGEMENT OF CLUTCH:
While changing gear, the driver switches on the handy switch to energize the relay.
The energized relay operates the motor, which in turn pulls the lever attached to the
pushrod of the master cylinder.
The fluid delivered by the master cylinder reaches the slave cylinder under pressure.
The pressurized fluid pushes the slave cylinder pushrod, which is connected to the
clutch release fork lever, so the clutch is disengaged.
ENGAGEMENT OF CLUTCH:
After changing the gear, the driver switches off the handy switch, so the relay
reverses its polarity and the motor rotates in the opposite direction. Meanwhile, the
clutch release fork returns by spring force, and the fluid from the slave cylinder and
master cylinder flows back to the reservoir, so the clutch is engaged. Sensor
switches near the master cylinder pushrod control the motor ON/OFF condition, and
a battery supplies power to the whole unit.
WORKING:
SYSTEM CONFIGURATION:
Master cylinder
Slave cylinder
Reservoir tank
Stepper motor
Relay unit
Sensor switches
Battery
SET-UP:
A handy switch is provided with the gear shift lever and is connected to the relay
unit, which controls the motor.
A master cylinder is connected to the motor unit by means of its pushrod lever, and
the slave cylinder is connected to the clutch unit by means of its pushrod lever.
Both the master and slave cylinders are fitted near the clutch unit and are supplied
with fluid from a reservoir tank. Two sensor switches are provided near the master
cylinder.
RELAY:
The detailed construction of the master cylinder is shown in the figure. In the
engaged condition, when the clutch fork is in the released position, the pushrod
rests against its stop due to the pedal return spring, and the pressure of the master
cylinder spring keeps the plunger in its rearmost position. The flange at the end of
the valve shank contacts the spring retainer. As the plunger has moved to its rear
position, the valve shank has lifted the seal from its seat, compressing the seal
spring. Hydraulic fluid can then flow past the three distance pieces and the valve
seal in either direction. This means the pressure in the slave cylinder is atmospheric,
and the clutch remains in its engaged position.
However, when the release fork is pressed to disengage the clutch, the initial
movement of the pushrod and plunger permits the seal spring to press the valve
shank and seal against the seat. This disconnects the cylinder from the reservoir.
Unlike cables, hydraulic operation does not involve frictional wear, especially
when subjected to large forces. Due to this reason hydraulic operation is particularly
suitable for heavy duty application, i.e., on large vehicles.
SENSOR SWITCH:
The sensor switches are used to control the motor on and off; they are
provided in the motor's path of movement.
STEPPER MOTOR:
DIODES:
The electrons are attracted towards the positive terminal of the battery, away from
the diode junction; consequently, there is no electron (i.e. current) flow through the
diode in this condition, which is called reverse bias.
APPLICATIONS:
Car
Jeep
Van
Heavy duty vehicles
Bus
Lorry
Trucks
ADVANTAGES:
Simple in design
More effective
Smooth in operation
Requires less manpower
Comfortable driving
Easy maintenance
Very useful for physically challenged persons
CONCLUSION:
Implementing our project in vehicles is very useful for physically challenged persons:
it makes the vehicle convenient to drive and the gears easy to change. Taking this
system as a base and analyzing it, the brake and accelerator could also be
controlled by electronic means.
----------------------------------------------------------------------------------------------------------------
Volumetric positional error, the relative error created between the cutting tool and
the workpiece, constitutes a large portion of the total machine tool error during
machining. The extent of error in a machine gives a measure of its accuracy. In
principle, there are two strategies to improve the accuracy of a multi-axis machine
[6].
1. Error Avoidance
- By increasing the precision in manufacturing, and
- By specific design improvements.
2. Software error compensation
- By using software correction for systematic geometric errors, and
- By on-line computational corrections for changing geometric and
thermally induced errors.
The general approach towards building accurate machine tools is to apply
error avoidance techniques during the design and manufacturing stage so that the
sources of inaccuracy are kept to a minimum. However, this approach involves a
high degree of investment, as machine cost rises exponentially with the level of
accuracy involved, and such machines also tend to be over-designed. The other
technique, namely error compensation, yields a more accurate machine at lower
cost [7]. In the master part tracking approach, the machine probe is used to track a
master component such as a circular disc or a ball bar, instead of measuring the
individual errors and generating a mathematical representation; this is a quick way
to assess the machine volumetric error [5]. Using D-H homogeneous transformation
matrices, the direct volumetric error can be evaluated for a multi-axis machine [9].
Automatic NC code converting software was developed so that the developed
system could be applied to practical CNC machining [1]. An online error
compensation method using a back-propagation neural network was proposed by
Chana et al. [14]. Software developed for error correction has been demonstrated in
a machine tool laboratory over 20 years to check its durability, Christopher D. et al.
[15]. This paper is organized into five sections: the first section is the introduction,
the second discusses the identification of errors, the third gives an overview of the
different errors, the fourth discusses the model for volumetric error, and the final
section covers volumetric compensation techniques.
3.1 GEOMETRIC ERROR: Geometric errors are the machine tool errors that exist
under cold-start conditions; they are caused by mechanical-geometric imperfections
and misalignment of the machine tool elements, and they change gradually due to
wear. They manifest themselves as position and orientation errors of the tool with
respect to the workpiece. On the assumption that the CNC milling machine consists
of rigid bodies, six degrees of freedom must be specified for each of the three
carriages (tool post, bed and column movements): three translational and three
rotational errors. These errors depend only on the position of the carriage in
question and not on the positions of the other carriages (rigid-body assumption).
For a single carriage, the translational errors are the positional error p (ypy) and the
straightness errors t (in the two directions perpendicular to the moving axis of the
carriage, ytx and ytz in Fig. 1); the rotational errors r are the pitch, yaw and roll
motions yrx, yrz and yry, respectively (Fig. 1). They must therefore be measured
separately for each carriage. Altogether, 18 position-dependent errors and,
additionally, three squareness errors between the three moving axes are to be
determined, giving a total of 21 geometric errors.
For reasons of clarity, it is assumed that only one carriage, here the Y-movement, is
affected by errors. Each slide linkage can be considered a rigid body moving on a
designated joint, and each linkage has its own error components. Since the whole
machine system is a chain of moving linkages, the tool position can be obtained by
multiplying the linkage error transformation matrices. There are various approaches
to modelling the geometric errors, such as analytic geometry, vector representation,
error matrices and homogeneous transformation matrices, all resting on the
assumption of rigid-body kinematics. Homogeneous transformation matrices (D-H
matrices) have the potential to facilitate a simple error model formulation for an
arbitrary configuration. The basic D-H matrix relates an arbitrary vector in frame (i)
to a vector in frame (i+1). By successive application of the homogeneous
transformation matrices of neighbouring links in the kinematic chain of a multi-axis
machine, one may express the position of a point in the last (tool) frame with respect
to the first (global) frame. Under the assumption of rigid-body motion of the
elements, link-geometry-related errors and link-motion-related errors of first, second
and even higher order can be expressed. The three translational errors (linear error
and straightness errors) and the three rotational errors (roll, pitch and yaw) are
described by a 4×4 transformation matrix for each carriage, giving Tx, Ty and Tz;
similarly, the squareness errors are represented by the 4×4 transformation matrices
Txz and Tyz. The three-dimensional positioning error due to xy-table and spindle
movement, T, is the sum of the positioning errors due to the linear and squareness
errors.
For the 3-axis milling machine there are 21 error components in the total position
error. The geometric error model is constructed using a rigid-body model and the
small-angle approximation. The geometric error model of Chana Raksiri et al. [14] is
given as:
Px = δxx + δxy + δxz − εzx·y + εyx·z + εyy·z + Sxy·y − Sxz·z − δyy·εzx − δyz·εzx − δyz·εzy
+ δzy·εyx + δzz·εyx + δzz·εyy + εxy·εzx·z + εzx·Syz·z + εzy·Syz·z (2)
Py = δyx + δyy − δzy·εxx − δzz·εxx − δzz·εxy + δxy·εzx − εzx·Sxy·y + δxz·εzx − εzx·Sxz·z
+ δxz·εzy − εzy·Sxz·z + δyz − Syz·z − εxx·z − εxy·z + εyy·εzx·z (3)
Pz = δzx + δzy + δzz + εxx·y + δyy·εxx − δxy·εyx − δxz·εyx − δxz·εyy + δyz·εxx + δyz·εxy
− εxx·εxy·z − εyx·εyy·z + εyx·Sxy·y − εxx·Sxz·z + εxz·Syz·z − εxy·δyz·z + εyx·Sxz·z (4)
where x, y, z are the nominal positions; δxx, δyy, δzz are the positional errors along
the x, y and z directions, respectively; δzx, δzy, δxy, δxz, δyz, δyx are the straightness
errors, where the first subscript refers to the error direction and the second to the
moving direction; εxx, εxy, εxz, εyy, εyz, εyx, εzz, εzx, εzy are the angular errors, where
the first subscript refers to the axis of the rotation error and the second to the moving
direction; and Sxy, Sxz, Syz are the squareness errors between each pair of axes.
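A minimal numerical sketch of this rigid-body formulation is given below: it builds the small-angle 4×4 error matrix for each carriage and chains it with the nominal motion, which is the operation underlying Eqs. (2)-(4). All numerical error values are illustrative assumptions, not measured data.

import numpy as np

def error_htm(dx, dy, dz, ex, ey, ez):
    """4x4 error transformation for one carriage under the small-angle
    approximation: translations (dx, dy, dz) and rotations (ex, ey, ez)."""
    return np.array([[1.0, -ez,  ey,  dx],
                     [ ez, 1.0, -ex,  dy],
                     [-ey,  ex, 1.0,  dz],
                     [0.0, 0.0, 0.0, 1.0]])

def nominal_htm(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Illustrative error values for the X, Y and Z carriages (m and rad).
Tx = nominal_htm(0.2, 0, 0) @ error_htm(5e-6, 2e-6, 1e-6, 1e-5, 2e-5, 1e-5)
Ty = nominal_htm(0, 0.1, 0) @ error_htm(1e-6, 4e-6, 2e-6, 2e-5, 1e-5, 3e-5)
Tz = nominal_htm(0, 0, 0.05) @ error_htm(2e-6, 1e-6, 6e-6, 1e-5, 1e-5, 2e-5)

# Chaining the carriages gives the actual tool point; its deviation from
# the nominal position is the volumetric error (Px, Py, Pz) at this pose.
tool = (Tx @ Ty @ Tz)[:3, 3]
nominal = np.array([0.2, 0.1, 0.05])
print(tool - nominal)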
Thermal errors occur due to continuous usage of a machine tool. When errors due
to the increase in temperature of the machine elements need to be appraised, only
those thermal deformations that lead to a relative displacement at the cutting point,
and thus influence the accuracy of the work being produced, are considered. The
effect of temperature on the change in shape of the machine components may be
determined by measuring the geometric/kinematic behaviour, with the temperature
distribution over the whole machine as a parameter. These errors are generated by
environmental temperature changes (the heating and cooling influence of the room,
the effect of people, thermal memory from any previous environment, and heating
and cooling provided by the cooling system), by local sources of heat such as drive
motors, friction in bearings, gear trains and other transmission devices, and by heat
generated by the cutting process. They cause expansion, contraction and
deformation of the machine tool structure and generate positional errors between
the cutting tool and workpiece. The machine tool elements particularly affected by
self-generated thermal distortion are spindles and ball screws. To first order, the
thermal elongation of a body is
Δl = α·L·ΔT (5)
with α being the linear coefficient of thermal expansion, L the length of the body and
ΔT the temperature variation from the reference state. The second-order effect
depends on various factors, such as the linear expansion coefficient α, the
temperature slope and the component of the temperature gradient effective in the
respective projection plane.
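As a worked instance of Eq. (5), the sketch below evaluates the first-order elongation of an assumed 500 mm steel ball screw for a 5 K temperature rise; the property value and dimensions are illustrative.

# Worked example of Eq. (5): dl = alpha * L * dT
alpha = 11.7e-6   # linear expansion coefficient of steel, 1/K (typical value)
L = 0.5           # length of the body, m (assumed ball screw length)
dT = 5.0          # temperature rise from the reference state, K

dl = alpha * L * dT
print(dl * 1e6)   # about 29 micrometres, already significant for precision work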
The dynamic stiffness of all the components of the machine tool (namely the
bed, column, etc.) that lie within the force flux of the machine is responsible for the
error caused by the cutting action. As a result of the cutting forces, the position of
the tool tip with respect to the workpiece varies on account of the distortion of the
various elements of the machine. Depending on the stiffness of the structure under
the particular cutting conditions, the accuracy of the machine tool will vary; thus, for
a machine with a given stiffness, a heavy cut will generally produce more inaccurate
components than a light cut. Most current error compensation research has not
considered the error generated by cutting forces. The argument used to neglect
cutting-force-induced errors is that in finish machining the cutting force is small, so
the resultant deflection can be neglected. However, modern machining techniques
involve machining hardened steel directly to its final form without the customary
grinding operations; in such cases the cutting forces can be very large, making it
impossible to neglect them. Force sensors play a major role in the elimination of
cutting-force-induced errors.
Piezoelectric sensors or strain gauges are used for this purpose. A cutting force
sensor has been developed and applied to measure the cutting forces, and the
resulting machining error is measured by a camera. The cutting force is responsible
for elastic strain on the machine tool structure. In the case of a turning centre, the
sensors are usually mounted in the spindle assembly; once mounted, they need to
be calibrated in order to record the forces properly. HTMs (homogeneous
transformation matrices) are used to combine all the error components and thus
derive the error synthesis model.
Where the workpiece is restrained by a small area of contact with the fixture,
deformation at the contact region or lift-off/slip of the workpiece can cause
significant errors. Workpiece displacement depends on several factors, such as the
position of the fixturing elements, the clamping sequence, the clamping intensity and
the type of contact surface; it can therefore be a significant source of machining
error. If the workpiece is insufficiently restrained, or if the fixture is weak in
comparison with the cutting force, slip or deformation, respectively, are bound to
occur at the fixture-workpiece interface. Proper fixture design is therefore required;
in the setup used, the workpiece should be placed in contact with the locators.
Other errors include tool wear and load-induced errors. Three types of force are
present during the machining process: (1) the workpiece weight, (2) forces resulting
from the cutting process and (3) gravity forces resulting from the displacement of
the masses of the machine components. All of these cause elastic strain on the
machine tool structure.
4.6 COMBINED POSITIONAL ERROR AT THE TIP OF THE MACHINE TOOL TIP:
If EX, EY and EZ are the volumetric error compensation components in the X, Y and
Z directions, respectively, the resultant volumetric error can be determined from the
following equation [1] (the usual root-sum-of-squares form):
E = √(EX² + EY² + EZ²) (6)
There are both on-line and off-line error compensation techniques; with on-line
compensation, all geometric errors are measured and compensated in real time.
This concept makes it unnecessary to measure temperature and correct separately
for temperature-induced errors. A multiple-degree-of-freedom laser system (MDFLS)
exists for the simultaneous measurement of several machine kinematic errors.
Similarly, recursive software was developed by Shih-Ming Wang [12] assuming a
non-rigid structure, as shown in Fig. 2. In addition, automatic NC code converting
software was developed so that the system could be applied to practical machining
on CNC multi-axis machines [1]. Application of the above method shows that the
average machining error improved from -273 to -8 micrometres; thus a significant
improvement in the accuracy of the machine tool can be observed as a result of the
compensation. A PC-based compensation controller (as shown in Fig. 3) was used
for real-time error compensation.
The error correction vector R_PCorrection with respect to the reference coordinate
frame can be obtained from the following matrix equation [1]:
R_PCorrection = R_Ptool − R_Pwork (7)
CONCLUSION:
This paper has reviewed how error compensation is a powerful and economical way
to upgrade the accuracy of multi-axis machine tools. Obtaining such improvement
requires a correct geometric model, a correct thermal model, a cutting-force-induced
error model, a fixture-dependent error model, a tool-wear-dependent error model
and careful machine calibration. We find that particular attention is still required for
squareness and angular errors, tool-wear-dependent error and thermal behaviour. If
all errors can be determined accurately, the CNC G-code commands can be
modified to obtain accurate products.
REFERENCES
1. A.C. Okafor, Derivation of machine tool error models and error compensation
procedure for three axes vertical machining center using rigid body kinematics,
International Journal of Machine Tools and Manufacture 40 (2000) 1199-1213.
2. Chana Raksiri, Manukid Parnichkun, Geometric and force errors compensation
in a 3-axis CNC milling machine, International Journal of Machine Tools and
Manufacture 44 (2004) 1283-1294.
3. Christopher D. Mize, Durability evaluation of software error correction on a
machining center, International Journal of Machine Tools and Manufacture 40
(2000) 1527-1534.
4. Christopher D. Mize, Durability evaluation of software error correction on a
machining center, International Journal of Machine Tools and Manufacture 40
(2000) 1527-1534.
5. Guiquan Chen, Jingxia Yuan, A displacement measurement approach for
machine geometric error assessment, International Journal of Machine Tools
and Manufacture 41 (2001) 149-161.
6. John A. Bosch, Coordinate Measuring Machines and Systems, Giddings &
Lewis Dayton, Ohio, Marcel Dekker, Inc., New York, pp. 279-299, 1991.
7. K.F. Eman, B.T. Wu, A generalized geometric error model for multi-axis
machines, Annals of the CIRP Vol. 36/1/1987.
8. Mahbubur Rahman, Jouko Heikkala, Modeling, measurement and error
compensation of multi-axis machine tools, Part 1: Theory, International Journal
of Machine Tools and Manufacture 40 (2000) 1535-1546.
9. P.D. Lin and Kornel F. Ehmann, Direct volumetric error evaluation for multi-axis
machines, International Journal of Machine Tools and Manufacture 33 (1993)
675-693.
10. R. Ramesh, M.A. Mannan, A.N. Poo, Error compensation in machine tools - a
review, part 1: geometric, cutting-force induced and fixture-dependent errors,
International Journal of Machine Tools and Manufacture 40 (2000) 1210-1256.
11. R. Ramesh, M.A. Mannan, A.N. Poo, Error compensation in machine tools - a
review, part 2: thermal errors, International Journal of Machine Tools and
Manufacture 40 (2000) 1257-1284.
12. Shih-Ming Wang, Kornel F. Ehmann, Measurement methods for the position
errors of a multi-axis machine, Part 1: principles and sensitivity analysis,
International Journal of Machine Tools and Manufacture 39 (1999) 951-964.
13. Shih-Ming Wang, Kornel F. Ehmann, Measurement methods for the position
errors of a multi-axis machine, Part 2: applications and experimental results,
International Journal of Machine Tools and Manufacture 39 (1999) 1485-1505.
14. Shih-Ming Wang, Yuan-Liang Liu, An efficient error compensation system for
CNC multi-axis machines, International Journal of Machine Tools and
Manufacture 42 (2002) 1235-1245.
15. V.S.B. Kiridena, P.M. Ferreira, Computational approaches to compensating
quasi-static errors of three-axis machining centers, International Journal of
Machine Tools and Manufacture, Vol. 34, No. 1, pp. 127-145, 1994.
Fig. 1: Error components of a single carriage (Y movement): positioning error ypy (δyy); straightness errors, horizontal ytx (δxy) and vertical ytz (δzy); rotational errors about the moving axis yry (εyy), the horizontal axis yrx (εxy) and the vertical axis yrz (εzy); squareness errors in plane XY xwy (Syx), plane XZ xwz (Szx) and plane YZ ywz (Szy).
Fig. 2: Concept of the software compensation scheme.
Fig. 3: PC-based compensation controller for real-time error compensation (CNC controller and recursive software rewriting the NC codes via an inverse-kinematics/ANN model and data bank, with A/D, Q/D and digital I/O boards, encoder feedback and thermocouples).
C.S.Verma
Abstract
Fireworks have become a symbol of happiness at festivals and happy
occasions. Fancy fireworks are nowadays quite common at wedding functions,
which provides a good year-round global market to the fireworks industries. Almost
35 percent of global demand is met by Indian fireworks industries through export.
Manufacturing fireworks requires a highly safe environment, which may affect
productivity and make the operation costly. Firework mixtures are sensitive to
friction, shock, impact, sunshine, moisture and electricity, and workers handle such
explosive chemicals directly while preparing the fireworks. Their safety is the most
important factor to be considered in producing fireworks. A good plant layout and a
safe environment have to be provided to the workers, which may in turn increase
the production cost.
INTRODUCTION
PRODUCTIVITY TRIANGLE (figure: triangle linking safety, quality and productivity)
When due consideration is given to safety, it automatically promotes
productivity. Safety not only refers to fire protection but also involves
Project Objective:
The goal of this project is to collect information, assess the overall material
flow, noting the constraints and incorporating the safety measures to obtain the
following:
1. Minimize the Material Flow for both Raw Material and Work-in-Progress.
2. Eliminate the Constraints in material flow.
3. Improve Material Handling Systems
4. And Ensure high level of safety in all aspects economically.
Constraints:
A survey of the existing operation is done to assess the current process flow for
work-in-process and raw material. Future needs are identified to assess the
space and equipment requirements. This information is gathered through
observation and direct interviews with the employees.
• The information gathered and the physical building constraints are combined
with safety and legal issues.
• The one way distance trips for work in process and raw materials flow among
all departments are analysed.
• This analysis improves the part flow for the largest one-way distance trips
through the departments and identifies potential areas of improvement using
optional handling equipment.
• Based on this analysis, the compounding area is suggested to be reallocated,
reducing the distance moves by 30% for all production lines.
• Improving the efficiency of access to the packaging area reduces congestion
and crossings of work – in – process.
Facilities layout suggestions:
After the completion of this initial effort, the company employees are involved
in a review to analyse and identify the impact of these changes in the current
methods for all the operating procedures of the manufacturing departments. Flow
diagrams are used as a basis for understanding these operating procedures.
Material handling equipment is recommended to improve the efficiency of the
material flows from the production areas.
Point-of-Use:
Major improvements:
1. The material handling was restricted to one movement from receiving dock to
storage; and from finished goods to stocking place.
2. The material handling for receiving and shipping is decentralized to avoid
congestion and unnecessary moves.
3. Floor locations are identified and painted on the shop floor to avoid
unnecessary moves and improve visibility.
CONCLUSION:
• Safety oriented production flow has more benefits. It represents important
opportunities of improvement in the organization.
• Simulation tools can be adopted for assessment of changes in production
flow. This gives a substantial reduction in cycle time.
Also, this will allow a greater inventory control with less investment and
cost reductions in material handling with minimum need for quality
control.
************
Abstract
Key words: Dense gas dispersion, Wind speed profile, downwind impact distance.
1. INTRODUCTION
Dangerous materials, and in particular toxic gases such as ammonia and chlorine,
are often used in industry. It is therefore necessary to pay particular attention to
these compounds in order to improve the safety of plants and of the storage and
transportation of such products. This paper deals with a possible leakage from a
container and the subsequent dispersion of chlorine. Chlorine is frequently used as
a basic raw material, especially to avoid algal formation in the ICW sump.
2. MODEL DESCRIPTION
When a gas that is heavier than air is released, it initially behaves very
differently from a neutrally buoyant gas. The heavy gas will first "slump," or sink,
because it is heavier than the surrounding air. As the gas cloud moves downwind,
gravity makes it spread; this can cause some of the vapour to travel upwind of its
release point (Figure 3.1). Farther downwind, as the cloud becomes more diluted
and its density approaches that of air, it begins behaving like a neutrally buoyant
gas. This takes place when the concentration of heavy gas in the surrounding air
drops below about 1 percent (10,000 parts per million). For many small releases, this
will occur in the first few yards (meters). For large releases, this may happen much
further downwind.
The heavy gas dispersion calculations that are used in ALOHA are based on
those used in the DEGADIS model (Spicer and Havens 1989), one of several well-
known heavy gas models. This model was selected because of its general
acceptance and the extensive testing that was carried out by its authors.
A gas that has a molecular weight greater than that of air (the average
molecular weight of air is about 29 kilograms per kilo mole) will form a heavy gas
cloud if enough gas is released. Gases that are lighter than air at room temperature,
but that are stored in a cryogenic (low temperature) state, can also form heavy gas
clouds. If the density of a gas cloud is substantially greater than the density of the air
(the density of air is about 1.1 kilograms per cubic meter), ALOHA considers the gas
to be heavy.
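The two rules of thumb quoted here (molecular weight relative to about 29 kg/kmol, cloud density relative to about 1.1 kg/m³) can be expressed directly. The sketch below is a simplified screening check and not ALOHA's actual decision logic.

# Simplified heavy-gas screening based on the rules of thumb in the text.
M_AIR = 29.0     # average molecular weight of air, kg/kmol
RHO_AIR = 1.1    # approximate density of air, kg/m^3

def may_form_heavy_gas(mol_weight, cloud_density):
    """True if the released gas is a heavy-gas candidate."""
    heavier_molecule = mol_weight > M_AIR    # e.g. chlorine, about 70.9 kg/kmol
    denser_cloud = cloud_density > RHO_AIR   # also catches cryogenic releases
    return heavier_molecule or denser_cloud

print(may_form_heavy_gas(70.9, 3.0))   # chlorine: True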
3. METEOROLOGICAL AND TOPOGRAPHICAL MEASUREMENTS
Wind direction: October-March, North-East; April-September, South-West.
Longitude: 78º 0ˈ - 78º 5ˈ E.
Chlorine tonner details:
Shape - cylinder
Diameter - 780 mm
Length - 2080 mm
Pressure - 10 bar
WC (water capacity) - 768 kg
TW (tare weight) - 653 kg
Copper tube - 10 mm
2. Coordinator
3. Government liaisons
Contacting and reporting information to related governmental agencies
Contacting the department of toxic response center to request safety and
health equipment for other departments to use to control the upset situation
4. Rescue team
Protecting the staff, dealing with the toxic materials, stopping the leaks,
repairing damage, and controlling fires.
Requesting and getting the necessary resources for executing emergency
rescues.
5. Information team
6. Medical team
7. CONCLUSION
The role of vertical variation of wind speed within atmospheric boundary layer
on the extension of vulnerable zone in the downwind direction with various surface
characteristics has been studied utilizing specialized software, viz., ALOHA
developed by EPA. A failure scenario of a chlorine tonner having 950 kg of liquid
chlorine has been considered. The surface characteristics corresponding to
roughness parameter and atmospheric stability conditions with varying surface wind
speeds have been taken into account in finding the extension of impact distances
traversed by the chlorine vapour cloud in the downwind direction. This result has
important implications for the extent of the vulnerable zone in the downwind
direction.
While chlorine gas is leaking, the staff must reduce the degree of hazard
classified as AL within the time limit set by this ERP. All of the ERTs must also
comply with the responsibilities designated for accidents once the ERP has been
initiated by the incident commander.
References
3. Faisal I. Khan, S.A. Abbasi, Modelling and control of the dispersion of hazardous
heavy gases.
ABSTRACT
1. Introduction:
In recent years a great deal of effort has been devoted to understanding how
accidents happen in industries. It is now generally accepted that most accidents
result from human error. It would be easy to conclude that these human errors
indicate carelessness or incompetence on the job but that would not be accurate.
Investigators are finding that the human is only the last link in a chain that leads
Allow the maintenance work to be carried out only after verifying the work
permit-operation
Allow the maintenance work to be carried out only after verifying the work
permit-maintenance
In case the validity of the work permit is to be extended, get approval from the
safety dept. (maintenance).
Inform the fire brigade station, safety dept., plant control and HOD about the
emergency condition.
The plant in-charge has to clear that NCR within the allocated time period.
1.4 ACCIDENT INCIDENT REPORTING
There are two main items of legislation relating to accident reporting and
investigation: the Reporting of Injuries, Diseases and Dangerous Occurrences
Regulations 1995 (RIDDOR), and the Management of Health and Safety at Work
Regulations 1999.
Before starting work at height, all the safety requirements (such as safety
belts, protective helmets and safety nets) shall be decided according to the needs of
the area/site by the executing agency in collaboration with the safety officer and the
contractor. These shall be documented.
The contractor’s scope of work shall include, but not be limited to execution of
work /contract, adequate safety arrangement for men, machines and materials,
etc, engaged during the execution of contract.
Before starting work, a safe work procedures/ protocol shall be prepared and
signed jointly by the executing department, representative of safety department
and the contractors or his representative. This procedure/protocol shall be
prepared by breaking the whole job into small elements and listing them
separately in the sequence. Against these elements, the agency responsible for
doing it would be mentioned. Any other details about these elements may also be
mentioned in the remarks column.
2. SOFTWARE USED
Technology: J2EE
Front-end: JSP
Framework: Struts
DBMS: MySQL
2.1 TESTING METHOD
3. SIMILAR OUTPUT
3.3 Analysis:
3.4 Recommendations:
Input such as employee details, nature of injury, work permit system, accident data, incident data and contractor data is given to the software, and the output is taken in printable format; the data can be stored and retrieved at any time.
6. Conclusion:
7. Acknowledgement:
8. References:
a Senior Lecturer, Mechanical Engineering Department, Mepco Schlenk Engineering College, Sivakasi
b II M.E., Industrial Safety Engineering, Mepco Schlenk Engineering College, Sivakasi
1. Introduction:
Hindustan Petroleum Corporation Limited (HPCL) operates an LPG loading and unloading unit, which fills LPG into bottles (domestic cylinders). Loading and unloading of LPG in domestic cylinders is a hazardous process. Therefore, the employees working for HPCL require good skills, knowledge and attitude regarding safety measures. Employees who are lacking in safety measures should be identified and improved through proper training.
The basic purpose of the project is to study and measure the knowledge level
using parameters like Need analysis, Entry Behavior analysis and Job safety
analysis. In addition, to improve safety, quality and productivity, which lead to a better working environment, a study on the fire safety training programme gains importance.
2. Objectives:
2.1 To identify the target population who are in need of Training
Program on Industrial Safety.
2.3 To identify the hazards in the jobs and to evolve safe practices for the
selected jobs.
2.5 To test the knowledge, skills and attitudes of the workers by way of pre-assessment and post-assessment questionnaires.
This objective is achieved by giving a set of questions to the
employees based on parameters like Fire protection systems, Gas,
Earthing and Personal protective equipments etc.,
2.6 To identify the topics to be covered in the training courses.
2.8 To design methods, strategies and lesson plans for all the topics to be
covered in the training courses.
4. Task analysis:
Task analysis is the process of breaking down or analyzing the task into
smaller and more detailed constituent units and of then sequencing these units of
analysis in an order of priority based on their importance in the learning.
Job analysis
Topic analysis
Skill analysis
5.1 Methods:
Questionnaire
Tests
Aims and objectives analysis is one of the most significant of all the steps involved in the systematic design of a training program.
Based on design system sources like need analysis, task analysis, aims and
objectives analysis the content is designed.
It includes,
Media selection
Selection procedure
Trainer-centered strategy
It includes,
Where the design system and the training are not working well.
Where the design system and the training are working adequately.
Where the design system and the training are working well.
13. Conclusion:
14. Suggestions:
15. Limitations:
16. Acknowledgement:
References:
ABSTRACT
The objective of this paper is to make the task easy to see and to create a good visual environment by careful planning of the brightness and colour pattern within both the work area and its surroundings. It includes screening the unwanted areas through an illumination survey analysis, with measurements compared against standards such as IS-3646/1966 and NBC-2005. The direct and reflected glare from light sources is then controlled to eliminate visual discomfort, by calculating parameters such as the adaptation luminance, veiling luminance and solid angle. The obtained value is compared with the glare-index study, and based on the comparison the number of light fittings required for a glare-free work environment is designed.
1. Introduction:
The Glare index for any installation may be derived from the basic formula, but
the procedure is lengthy.
Based on the calculation we get the value of the glare index. The following method is also used to find out the glare index:
Table I - Glare limits: shielding angle by luminaire quality class
Lamp luminance (cd/m²)        B      D      E      Lamp type
L ≤ 2·10⁴                     10º    0º     0º     Fluorescent lamp
2·10⁴ < L ≤ 50·10⁴            15º    5º     0º     HP discharge lamp, LP sodium lamp
L > 50·10⁴                    30º    15º    0º     HP discharge lamp (clear)
Luminance limits for luminaires apply at critical angles γ, 45º < γ < 85º.
2.4 Calculation of Colour Temperature:
One aspect of good lighting is the prudent use of electrical energy. The lighting industry has a long record of continuous improvement in the efficiency of lamps, control gear and luminaires. When lamp types are being selected for a new installation, the following are the principal characteristics which should be taken into consideration:
Step-1: Decide the required illuminance on the work plane, and the type of lamp and luminaire.

The utilization factor is the ratio of the luminous flux received on the work plane to the total luminous flux emitted by the source. The room reflectance is the proportion of light that is reflected collectively by all the surfaces in a room. The utilization factor accounts for light received directly from the luminaires as well as light reflected off the room surfaces; it is possible to determine the utilization factor for different light fittings if the reflectances of both the walls and the ceiling are known. For a twin-tube fixture, the utilization factor is 0.66, corresponding to a room index of 2.5.
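Given a utilization factor, the lumen method can estimate the number of fittings needed for a target illuminance. The sketch below is illustrative only: the target illuminance, area, lamp flux and maintenance factor are all assumed values, while UF = 0.66 is the twin-tube figure quoted above.

```python
# Illustrative lumen-method sizing (target illuminance, area, lamp flux and
# maintenance factor are assumed; UF = 0.66 is the twin-tube value quoted
# above for a room index of 2.5).
import math

E_target = 300.0               # required illuminance on the work plane (lux)
area = 50.0                    # work plane area (m^2)
flux_per_fitting = 2 * 2800.0  # twin-tube fitting, ~2800 lm per tube
UF = 0.66                      # utilization factor
MF = 0.8                       # maintenance factor

n = math.ceil(E_target * area / (flux_per_fitting * UF * MF))
print(n)  # number of twin-tube fittings required (6 for these values)
```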
[Table: measured illuminance values for the surveyed areas]
The recommended value for the above luminaires is 1.5. If the actual ratio is more than the recommended value, the uniformity of lighting will be less.
3. Conclusion:
Based on the survey results, the areas with insufficient and excessive illumination were identified. The veiling luminance and adaptation luminance were calculated to find the glare index, which was compared with the maximum allowable glare index as per the illumination standards. In the glare areas, repositioning of the lamps is designed to give a comfortable visual environment. Further research is in progress on the design of the light fittings required.
4. Acknowledgement:
5. Reference:
ABSTRACT
This paper deals with the reduction of noise levels in cement industry machines using engineering controls. It is advisable and better to consider noise control measures at the design stage, but if this is not done, other engineering controls such as barriers and enclosures must be adopted to reduce the noise level. The noise levels of various equipment such as the crusher, cement mill, kiln, coal mill and vertical roller mill, and the absorptive effect of the materials presently around each piece of equipment, are measured. The amounts of sound absorption by various materials are studied based on their absorption coefficients, and a barrier of a suitable material is designed and implemented to reduce the noise level; the noise reduction ratio is then calculated.
c. The difference between the levels in the above two steps gives the noise level to be reduced.
In this paper, a noise control system for the cement industry is designed. This involves several steps, considered one by one.
• Areas where employees are likely to be exposed to harmful levels of noise and
personal dosimeter may be needed,
• Machines and equipment which generate harmful levels of noise,
b. Rotary kiln
c. Cement mill
d. Coal mill
f. DG set area
Noise level refers to the level of sound. It is usually measured with a sound level meter (SLM).
The microphone detects the small air pressure variations associated with
sound and changes them into electrical signals. These signals are then processed
by the electronic circuitry of the instrument. The readout displays the sound level in
decibels.
The current International standard for sound level meter performance is IEC
61672:2003. Based on the survey, we are preparing a noise survey report. This
report includes:
Among other items, the report includes the equivalent sound exposure level normalized to an 8-hour day:

Lex,8 = 10 log10 [ Σ(i=1..n) (ti / 8) × 10^(SPLi / 10) ]

Where
Lex,8 is the equivalent sound exposure level in 8 hours,
i is a discrete activity of a worker exposed to a sound level,
ti is the duration in hours of activity i,
SPLi is the sound level of activity i in dBA, and
n is the total number of discrete activities in the worker's total workday.
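A minimal sketch of this computation, with a hypothetical workday:

```python
# Illustrative sketch (activity data are hypothetical): computing Lex,8
# from a list of (duration in hours, sound level in dBA) records.
import math

def lex_8(activities):
    """Equivalent sound exposure level normalized to an 8-hour workday."""
    total = sum((t / 8.0) * 10 ** (spl / 10.0) for t, spl in activities)
    return 10 * math.log10(total)

# Hypothetical workday: 4 h near the cement mill at 95 dBA,
# 2 h near the crusher at 92 dBA, 2 h in the office at 70 dBA.
print(round(lex_8([(4, 95), (2, 92), (2, 70)]), 1))  # ≈ 93.0 dBA
```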
4. The survey results are compared with various standards such as the Factories Act / Noise Regulation Rules 2000 and OSHA:
Based on the comparison statement, we finalise the various noise control zones and then proceed with further research.
Duration (hr)    Sound level dB(A)
8                90
6                92
4                95
3                97
2                100
1                102
¼ or less        115
The total sound absorption A of a room is

A = S1 α1 + S2 α2 + ... + Sn αn = Σ Si αi

Where Si is the area of each surface (m²) and αi is the absorption coefficient of that surface.

The mean absorption coefficient for the room can be expressed as:

am = A / S

Where S is the total surface area of the room (m²).
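A short sketch of these two formulas, with hypothetical surface data:

```python
# Illustrative sketch (surface data are hypothetical): total room absorption
# A = sum(S_i * alpha_i) and mean absorption coefficient a_m = A / S.
surfaces = [
    (120.0, 0.02),  # concrete floor: area (m^2), absorption coefficient
    (120.0, 0.30),  # perforated-panel ceiling
    (200.0, 0.05),  # brick walls
]

A = sum(area * alpha for area, alpha in surfaces)  # total absorption (m^2 sabins)
S = sum(area for area, _ in surfaces)              # total surface area (m^2)
a_m = A / S                                        # mean absorption coefficient
print(f"A = {A:.1f} m^2, a_m = {a_m:.3f}")
```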
6. Acknowledgement
The author is grateful to the management, the principal and the HOD, Department of Mechanical Engineering, Mepco Schlenk Engineering College, Sivakasi, for their constant encouragement and for offering facilities to carry out this research work.
7. Conclusion
The noise levels at the noise-producing areas are measured using a noise level meter and the employees' noise exposure is calculated. This is compared with standards such as OSHA and the Factories Act. The absorption coefficient of the rooms where excessive noise levels exist is calculated. The project is in progress and the remaining steps are yet to be completed.
8. References.
1. Cyril M. Harris, "Handbook of Noise Control", Second Edition, pp. 5-1, 7-11.
2. John M. Handy, "Noise Control for Industry", Industrial Acoustical Company, USA, pp. 2-7.
3. "Noise Figure Measurement Accuracy - The Y-Factor Method", Agilent Technologies, pp. 5-8.
ABSTRACT
Emotional Intelligence refers to the capacity for recognizing our own emotions and those of others, for motivating ourselves, and for managing emotions. It is also defined as "the ability to monitor one's own and others' feelings and emotions, to discriminate among them and to use this information to guide one's thinking and actions". Research evidence reveals that its application to industrial workers and its possible effectiveness in enhancing their performance are yet unknown. Frames of Mind: The Theory of Multiple Intelligences introduced the idea of multiple intelligences, which included both interpersonal intelligence (the capacity to understand the intentions, motivations and desires of other people) and intrapersonal intelligence (the capacity to understand oneself, and to appreciate one's feelings, fears and motivations).
1. INTRODUCTION:
2. PROBLEM DEFINITION:
(i) A major number of accidents occur due to unsafe human acts and behaviour.
(ii) Unsafe behaviours are most probably noticeable at any given point of time, which makes the problem more challenging.
Work accidents constitute an extremely serious problem in our society, given the important psychological, health, social, economic and organizational consequences associated with them (International Labour Organization, 2003). This problem is reinforced by statistics, which reveal worrying numbers. Recent world data from 2001 (International Labour Office, 2005) indicate the occurrence of 268 million non-fatal and 351,500 fatal work accidents; in Europe the latest estimates, for the year 2003, allude to around 4.2 million work accidents resulting in more than 3 days of absence from work (EUROSTAT, 2005).
NOTE: 90% of the above-said accidents are due to unsafe acts and behaviour.
Hand cut
Bone fracture
Burn Injuries
3.3 LIST OF WORKMEN:
INDUSTRY:
1) A fishbone diagram is used to identify and list all the factors that are conditioning the problem at hand.
2) The process is called fishbone analysis because of the way in which the identified factors are arranged to resemble the skeleton of a fish.
3) It is usually applied in the mobilize stages of the process to identify scale and scope.
SPSS (originally, Statistical Package for the Social Sciences) was released in its
first version in 1968 after being founded by Norman Nie and C. Hadlai Hull. Nie was
then a political science postgraduate at Stanford University, and now Research
Professor in the Department of Political Science at Stanford and Professor Emeritus
of Political Science at the University of Chicago. SPSS is among the most widely
used programs for statistical analysis in social science. It is used by market
researchers, health researchers, survey companies, government, education
researchers, marketing organizations and others. The original SPSS manual (Nie,
Bent & Hull, 1970) has been described as 'Sociology's most influential book'. In
addition to statistical analysis, data management (case selection, file reshaping,
creating derived data) and data documentation (a metadata dictionary is stored with
the data) are features of the base software.
Choices:
1. Never
2. Often
3. Most of the time
4. Rarely
Observed values:
           Question 1   Question 2   Total
Yes        104          52           156
No         96           148          244
Total      200          200          400

Null hypothesis: the responses are independent of the question.
Alternate hypothesis: the responses depend on the question.

Expected values:
           Question 1   Question 2   Total
Yes        78           78           156
No         122          122          244

X² (Chi-square) = Σ (O - E)² / E
= (104 - 78)²/78 + (52 - 78)²/78 + (96 - 122)²/122 + (148 - 122)²/122
= 28.4

Critical values (1 degree of freedom): X²(0.05) = 3.841, X²(0.01) = 6.635.

Result: since 28.4 exceeds both critical values, the null hypothesis is rejected.
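The hand calculation can be re-checked with SciPy; the observed counts are those of the reconstructed table, and correction=False disables the Yates continuity correction so the statistic matches the value above.

```python
# Re-check of the chi-square test using SciPy (observed counts taken from
# the reconstructed 2x2 table above).
from scipy.stats import chi2_contingency

observed = [[104, 52],   # "Yes" responses to Question 1 and Question 2
            [96, 148]]   # "No" responses

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
print(expected)  # expected counts: [[78, 78], [122, 122]]
```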
            Q1   Q2   Q3   Q4   Total
Choice 3    8    28   4    8    48 (V3)

ANOVA table:
Between-choices mean square = 18173 / 2 = 9086.5
Within-choices mean square = 29119 / 9 = 3235.4
F = 9086.5 / 3235.4 = 2.808
Result:
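A quick re-check of the F statistic from the sums of squares and degrees of freedom implied above:

```python
# Reproducing the F statistic from the sums of squares and degrees of
# freedom implied by the text (SS_between = 18173 with df = 2;
# SS_within = 29119 with df = 9).
ss_between, df_between = 18173.0, 2
ss_within, df_within = 29119.0, 9

ms_between = ss_between / df_between   # 9086.5
ms_within = ss_within / df_within      # 3235.4
F = ms_between / ms_within
print(f"F({df_between},{df_within}) = {F:.3f}")  # 2.808
```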
All of the above must be included in their regular technical training sessions.
8.0 CONCLUSION:
LITERATURE REFERENCE:
• Douglas M. Wiegand, "Exploring the role of emotional intelligence in behavior-based safety coaching", Journal of Safety Research, Volume 16, July 2007, pages 391-928.
• Dr. H.L. Kaila, "Behaviour based safety in organizations", Industrial Safety Chronicle, December 2006, pages 83-88.
• Pedroso Goncalves, "Impact of work accidents experience on causal attributions and work behaviour", Journal of Safety Science, November 2007, pages 992-1001.
• www.eiconsortium.org
• P. Felvia Shanthi, "Psychology of Teaching and Learning".
• Richard C. Bell, "Factorial validity of emotional intelligence", Journal of Individual Differences, February 2007, pages 487-500.
• Harald, "Testing and validating the trait EI questionnaire", Journal of Individual Differences, February 2008, pages 1-6.
DESIGN OF SAFE PYROTECHNIC COMPOSITION TO CONTROL SO2 EMISSION
OF CRACKERS
ABSTRACT
1. Introduction:
In recent years, concern about both the short-term and long-term effects of air pollution has increased. One of the most unusual sources of pollution in the atmosphere is the display of fireworks to celebrate festivals worldwide as well as specific events. The burning of fireworks is a huge source of gaseous pollutants such as ozone, sulphur dioxide and nitrogen oxides, as well as suspended particles. The aerosol particles emitted by fireworks are generally composed of metals (potassium, magnesium, barium, copper), and the complex nature of the particles emitted during fireworks may cause adverse health effects.
3. Apparatus used
Initially the cracker was placed near the noise level meters, which were positioned 4 metres away from the cracker in the north, south, east and west directions. Before proceeding with the above procedure, the noise level meters were set. The crackers were then fired and the noise levels were recorded simultaneously.
For finding the composition of the gases, a vacuum pump was used to collect the gases when the crackers were fired. The following procedure was followed:
The initial setup was done to collect the gases from the crackers. Make sure that the following components are readily available:
Electric Heater
Hood (Made of Steel)
Transparent tubes
Balloons
4.1 Procedure:
Initially one steel plate was placed on the electric heater and the heater was switched on. The chemical composition (1 gram) of the cracker was placed on the steel plate. The hood was placed over the steel plate so that it covered the chemical composition of the cracker. One end of a transparent tube was connected to the hood and the other end to the vacuum pump. Another transparent tube was connected to the vacuum pump to collect the gases released when the cracker's chemical composition was fired. Finally, the flue gases were collected in the balloons.
5.1 Principle
Apparatus
5.2 Procedure:
Initial requirement:
1 litre of water
Iodine (13 grams)
Starch (2 ml)
Initially all the apparatus is cleaned. 100 ml of distilled water is poured into the gas wash bottle, 2 ml of starch is added, and the two are mixed. 10 ml of N/10 iodine solution is then added to the mixture. The collected flue gases are sent to the gas wash bottle containing the iodine mixture. The mixture initially has a thick blue colour, which turns colourless as the flue gases are added. The volume of water collected in the gas wash bottle is then measured.
5.3 Calculation
= 9.3%
= 2.65 ppm.
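The intermediate steps of this calculation are not preserved in the text; under standard iodometry (SO2 + I2 + 2H2O → H2SO4 + 2HI, so one mole of absorbed SO2 decolourizes one mole of iodine), the estimate would follow a sketch like this, with every input value assumed for illustration:

```python
# Hypothetical iodometric estimate (the paper's own calculation steps are
# not shown; all input values here are assumed).
# Reaction: SO2 + I2 + 2H2O -> H2SO4 + 2HI, i.e. 1 mol SO2 per mol I2.
iodine_normality = 0.1     # N/10 iodine solution, as in the procedure
iodine_volume_ml = 10.0    # volume of iodine decolourized (assumed)

moles_i2 = (iodine_normality / 2) * iodine_volume_ml / 1000.0  # I2 is a 2e- oxidant
mass_so2_mg = moles_i2 * 64.07 * 1000.0                        # M(SO2) = 64.07 g/mol

sampled_gas_litres = 12.0  # gas volume drawn through the wash bottle (assumed)
conc_mg_per_l = mass_so2_mg / sampled_gas_litres
print(f"SO2 ≈ {mass_so2_mg:.1f} mg in {sampled_gas_litres} L ≈ {conc_mg_per_l:.2f} mg/L")
```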
6. Conclusion
7. Acknowledgement
References
1. Roberta Vecchi, "The impact of fireworks on airborne particles".
2. Jarvis, "The Combustion Reactions of a Pyrotechnic White Smoke Composition".
3. Ghosh, K.N. (1987), "The Principles of Fireworks".
4. Vijay Malik, "Indian Explosives Act 1884".
EXPLOSIVITY TESTING OF HIGH ENERGY CHEMICALS
P. Karlmarx a, Azhagurajan b
a II M.E., Industrial Safety Engineering, Mepco Schlenk Engineering College, Sivakasi
b Senior Lecturer, Mechanical Engineering Department, Mepco Schlenk Engineering College, Sivakasi
ABSTRACT
The Mechanical and thermal sensitivity of pyrotechnic compositions consisting
of mixtures of potassium nitrate (KNO3 ), sulphur (S) and charcoal (C) were found by
varying different compositions of KNO3, S, C and changing the fuels and oxidizers.
This indicates that all the compositions were found to be sensitive. Impact
sensitiveness of pyrotechnic compositions is analyzed using equipment similar to
BAM (fall hammer) equipment. Results indicate that an increase in the sulphur
content of the mixture raises its sensitivity to impact. The limiting impact energy falls
in the range of 11 to 12 Joules for the compositions studied.
1. Introduction:
Pyrotechnic mixtures are energetic compounds susceptible to explosive degradation on ignition, impact and friction. Several accidents have been reported in Indian fireworks manufacturing units during processing, storage and transportation. An analysis of accident data recorded during the past ten years in Tamilnadu, India, has shown that the main cause is inadequate knowledge of the thermal, mechanical and electrostatic sensitiveness of fireworks mixtures.
Most fireworks mixtures consist of an oxidizer, a fuel, a colour
enhancing chemical and a binder. The chemicals employed and their compositions
vary depending upon the type of fireworks being produced. The fireworks
effectiveness depends not only on the mixture composition, but also on the factors
such as particle size, moisture content, packing density and purity of the chemicals.
As per the Indian Explosives Act, 1884, the use of chlorate and sulphur mixtures is prohibited due to their ease of ignition and sensitiveness to undergo explosive decomposition. Alternative mixtures have therefore been widely used in the fireworks industry.
But still accidents occur, and the main reason is the poor understanding of the
explosive nature and lack of mechanical and thermal sensitivity data for mixtures
containing nitrate and sulphur compounds. In the past researchers have studied the
thermal stability and mechanical sensitivity of sulphur and chlorate mixtures.
However, the impact sensitivity of mixtures containing potassium nitrate (KNO3),
sulphur (S), charcoal (C) has not yet been reported.
The present study has multiple objectives; the first is the classification of
mixture. The other objectives are: to study the impact sensitiveness of mixtures
containing KNO3, S, and Al using the statistical tool mixture design.
3. Experimental
3.1 Materials
The chemicals used for the preparation of the gunpowder were obtained from a fireworks manufacturing company situated in the southern state of Tamilnadu,
India. The purity and assay of the chemicals were: KNO3-91.6%, S-99.84%, and C-
99.71%. The chemicals were passed through a 100-mesh brass sieve. The samples
were stored in an airtight container and kept away from light and moisture.
The diagram of the equipment used in this study for friction sensitiveness
measurement is shown in figure 1. The friction sensitivity was determined using a friction tester following the common test method of the BAM friction apparatus. To set the friction tester to the starting position, the hand wheel on the top of the motor is turned in such a way that the two marks at the side of the table and the base are lined up. When the start button is pressed, the table moves one time backwards and then one time forwards.
After setting the machine in the start position, a porcelain plate is placed
into the holding assembly with the "sponge marks" in the opposite
direction of motion.
Load the bar with a weight, and push the start button (no. 9).
The table with the porcelain plate moves, to and fro, over a
distance of about 10 mm with a speed of 141 r/min.
When starting a test with an unknown material, a weight is chosen approximately in the middle of the loading range and the test is started. If two reactions are detected, the load is decreased. If no reactions occur, the load is increased.
3.2.2 Impact sensitivity tester
The diagram of the equipment used in this study for impact sensitiveness measurement is shown in figure 2, a half-sectional front view showing the fixed plate, supporting column, guide rod, solenoid-controlled releasing device, sliding plate, AC 230 V supply, clamping screw, drop weight, top anvil, locating ring, spark sensor, sample, LED, bottom plate and bottom anvil.
The dropping weight was controlled remotely. On triggering the remote, the drop weight falls on the sample through the guides fixed to the column, so that the weight drops directly on the striking head of the anvil without rebound or distortion. Ignition of the mixture was observed using an optical sensor. The impact sensitiveness was measured in terms of the limiting impact energy (LIE), calculated using equation 1:

LIE = m g h    (1)

Where
LIE - limiting impact energy in joules (J)
m - weight of the drop mass in kilograms (kg)
g - acceleration due to gravity (9.81 m/s²)
h - fall height in metres (m)
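A minimal sketch of equation 1, with an assumed drop mass and fall height:

```python
# Sketch of the limiting impact energy (equation 1); the drop mass and
# fall height below are assumed values for illustration.
g = 9.81  # acceleration due to gravity (m/s^2)

def limiting_impact_energy(mass_kg, height_m):
    """LIE = m * g * h, in joules."""
    return mass_kg * g * height_m

# A 2 kg drop weight released from 0.6 m delivers about 11.8 J, within the
# 11-12 J range reported for the compositions studied.
print(round(limiting_impact_energy(2.0, 0.6), 2))
```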
Drying Condition:
Humidity - 69%
Temperature - 39 °C
Time - 1 hr

Sl. No.   KNO3 (%)   S (%)   C (%)   Condition
1         75         10      15      With drying      0   0
2         75         10      15      Without drying   0   0
Impact sensitivity testing results for two different compositions under wet and dry conditions show that they are impact sensitive, and the limiting impact energy (LIE) was in the range of 10.4 to 13.73 J. It was observed that the impact energy varied when any one of the component concentrations of the mixture was changed. This behaviour is due to the sensitivity and reactivity of each component. Varying the quantity of potassium nitrate in the mixture had only a minimal effect on impact sensitivity. However, increasing the concentration of sulphur had a marked influence on impact sensitivity.
6. Acknowledgements
References
7. Jeya Rajendran and T.L. Thanulingam, "A new formula for environment friendly high energy pyrotechnic mixture".
ABSTRACT
Many researches have been carried out to improve heat transfer rates. Traditional coolants like water, ethylene glycol, engine oil and acetone have poor heat transfer properties. In this project, to improve the heat transfer rate of conventional fluids, nano particles are suspended in the base fluid. Two mediums are selected, namely water and ethylene glycol, and nano particles of aluminium oxide are chosen to improve their heat transfer rate. We have considered the problem of forced convection flow of a fluid inside a uniformly heated tube that is subjected to a constant and uniform heat flux at the wall. The heat transfer coefficient was analyzed for Reynolds numbers of 10000, 20000 and 30000. Finally we have proved that the heat transfer coefficient of conventional fluids like water and ethylene glycol shows better results when they are mixed with the nano particles.
INTRODUCTION
Conventional heat transfer fluids such as water, ethylene glycol, engine oil and acetone have poor heat transfer properties compared to those of most solids. In spite of considerable research, heat transfer capabilities have suffered a major lack of improvement, so there is an important need to develop new strategies to improve the effective heat transfer behaviour of conventional heat transfer fluids. To improve the heat transfer rate of conventional fluids, nano particles are suspended in the base fluid. In this context two fluid mediums are selected, namely water and ethylene glycol. To enhance the heat transfer rate of the above-said mediums, nano particles of aluminium oxide are chosen, because they are:
• Chemically stable
In order to analyze the heat transfer rate of the base fluid and nano fluid CFD
software namely FLUENT is used. This project mainly deals with the heat transfer
coefficient analysis of the fluid medium under the following options.
Pak and Cho studied the heat transfer performance of Al2O3 and TiO2 nano particles suspended in water and reported that the convective heat transfer coefficient is 12% smaller than that of pure water at 3% volume fraction.
PROBLEM IDENTIFICATION:
The test section chosen for this work is a straight brass tube with an inner diameter of 10 mm and a length of 800 mm, subjected to a uniform heat flux of 3.5 kW. The test section is created using the GAMBIT software and then exported to the FLUENT software to analyze the heat transfer rate. The heat transfer rates are analyzed for Reynolds numbers of 10000, 20000 and 30000. The results are obtained in the CFD software (FLUENT). FLUENT is a computational fluid dynamics (CFD) program used to simulate fluid flow in a variety of applications. The ANSYS CFX product allows engineers to test systems in a virtual environment; the scalable program has been applied to the simulation of water flowing past ship hulls, among other applications.

The nano particles are suspended in the base fluid to form the nano fluid. The nano fluid is then allowed to pass through a uniformly heated pipe (brass tube) and the heat transfer rate is analyzed for different Reynolds numbers.
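As a rough hand-check on such CFD results (not the method used in this work), the single-phase turbulent heat transfer coefficient can be estimated with the Dittus-Boelter correlation, Nu = 0.023 Re^0.8 Pr^0.4 and h = Nu k / D; the water property values below are assumed.

```python
# Illustrative Dittus-Boelter estimate (not this paper's CFD method):
# Nu = 0.023 * Re^0.8 * Pr^0.4 and h = Nu * k / D, for heating of water
# in a 10 mm tube. Property values are assumed (water near room temperature).
D = 0.010   # tube inner diameter (m), from the test section
k = 0.6     # thermal conductivity of water (W/m K), assumed
Pr = 5.8    # Prandtl number of water, assumed

for Re in (10000, 20000, 30000):
    Nu = 0.023 * Re**0.8 * Pr**0.4
    h = Nu * k / D
    print(f"Re = {Re}: h ≈ {h:.0f} W/m^2 K")
```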
The addition of small particles to the fluid can sometimes provide heat transfer enhancement. However, work in this area shows that suspensions of micro- to macro-size particles bear the following major disadvantages:
• The particles settle rapidly, forming a layer on the surface and reducing the heat transfer capacity of the fluid.
• The large size of the particles tends to clog the flow channels, particularly if the cooling channels are narrow.
A nano particle is a small object, smaller than one tenth of a micrometer in at least one dimension, that behaves as a whole unit in terms of its transport and properties. Some authors restrict the size to as low as 1 to ~30 nm, but a more common definition places nano particles between 1 and 100 nanometers, though the size limitation can be relaxed for some materials. In terms of diameter, fine particles cover a range between 100 and 2500 nanometers, while ultrafine particles are sized between 1 and 100 nanometers.
• High mobility
Plasma deposition is used to make deposits on surfaces rather than new structures. In this way it resembles chemical vapour deposition, except that the species involved are ionized. As a surface deposit, the nanomaterial can be as little as a few atoms in depth. It is not a nanomaterial unless at least one dimension of the bulk particles of the surface deposit is of nanometre scale; if this is not true, it is a thin film and not a nanomaterial. Each particle must be independent, apart from interactions during formation. Solutions are clear because molecules of nanometre size are dispersed and move around randomly; in colloids the molecules are much larger and scatter light, unlike molecules in a solvent.
Plasma is an ionized gas. A plasma is achieved by making a gas conduct electricity: a potential difference is provided across two electrodes so that the gas yields up its electrons and ionizes. A typical plasma arcing device consists of two electrodes, with an arc passing from one electrode to the other. The first electrode vaporizes as electrons are taken from it by the potential difference. To make carbon nanotubes, carbon electrodes are used. Atomic carbon cations are produced; these positively charged ions pass to the other electrode, pick up electrons and are deposited to form nanotubes.

In chemical vapour deposition, an object to be coated is allowed to stand in the presence of the chemical vapour. The first layer of molecules or atoms deposited may or may not react with the surface. However, these first-formed deposited species can act as a template on which materials are often aligned, because the way in which atoms and molecules are deposited is influenced by their neighbours. During deposition a site for crystallization may form in the depositional axis, so that aligned structures grow vertically.
Re       h (base fluid)   h (nano fluid)
10000    6892.77          7048.02
20000    7739.3           7887.27
30000    8303.12          8440.58
CONCLUSION
The convective heat transfer features of water and the nano fluid in a tube were analyzed. The suspended nano particles remarkably enhance the heat transfer process, and the nano fluid has a larger heat transfer coefficient than the original base fluid at the same Reynolds number. The heat transfer of a nanofluid increases with the volume fraction of nano particles. Thus we conclude that the poor heat transfer properties of conventional fluids can be enhanced by mixing nano particles with the base fluids.
REFERENCES
3. Pak, B. and Cho, Y.I., "Hydrodynamic and heat transfer study of dispersed fluids with submicron metallic oxide particles", Experimental Heat Transfer, Vol. 11, pp. 151-170, 1998.
5. Xuan, Y. and Roetzel, W., "Conceptions for heat transfer correlation of nanofluids", International Journal of Heat and Mass Transfer, Vol. 43, pp. 3701-3707, 2000.
ABSTRACT
Measurement of breathing zone fume generation and of the individual particulate concentrations generated from the base metal and weld electrode is vital for taking appropriate measures to eliminate them at source. In order to take control measures, the most significant parameters, such as current, voltage, electrode diameter, stick-out distance and welding speed, should be identified; ER70S6 MIG wire and E6013 SMAW welding electrodes were taken for this study. Further, the study was focused on assessing the breathing zone concentration using a personal air sampler and the individual particulate concentrations using an Inductively Coupled Plasma Analyser. A statistical model using ANOVA was developed to determine the plume dispersion within the environment with respect to the various input parameters.
1. Introduction:
Welding is one of the most widely used metal fabrication methods. Of all the
welding processes, manual metal arc welding and metal inert gas welding account
for 60-70 percent of welding activities in the industry. Workers are exposed to emissions such as fumes and gases arising during the welding process unless these are effectively controlled. Fumes consist of individual constituents like chromium, magnesium, nickel, etc., that may result in respiratory disorders, including bronchitis, airway irritation, lung function changes and a possible increase in the incidence of lung cancer, if the exposure exceeds the threshold limit value (5 mg/m³).

Welding fumes have posed a threat to health since the first coated electrodes were introduced; the earliest cases of welders being affected by noxious fumes were recorded when operators were found to exhibit signs of pneumoconiosis.
2. Experimental
• Sampling pump
• Cassette
• Filter paper
• Tube connections
Power supply : 6V
The variance in the input parameters can be determined with the help of statistical modelling using ANOVA, identifying the input parameters on which the fume concentration depends and whether it may exceed the Occupational Exposure Limit.
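The breathing zone concentration from a personal air sampler is obtained as the filter mass gain divided by the sampled air volume (pump flow rate × sampling time); a sketch with assumed values follows.

```python
# Illustrative breathing-zone concentration calculation (all values are
# assumed; the paper describes only the sampling train, not raw numbers).
filter_gain_mg = 0.45   # mass gained by the filter paper (mg), assumed
flow_rate_lpm = 2.0     # sampling pump flow rate (litres/min), assumed
sampling_min = 30.0     # sampling duration (minutes), assumed

air_volume_m3 = flow_rate_lpm * sampling_min / 1000.0  # litres -> m^3
concentration = filter_gain_mg / air_volume_m3         # mg/m^3
print(f"Fume concentration ≈ {concentration:.1f} mg/m^3 (TLV = 5 mg/m^3)")
```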
4. Results and Discussion
An extensive set of tests were conducted over a wide range of conditions for
MIG welding wire ER-70S6 using Semi-mechanized welding station and SMAW
welding electrode E6013, and the results have been tabulated. A comparative study with the results of the manual welding process shows the difference in the concentration of fumes. This can be attributed to the fact that the temperature of the plume is reduced considerably due to its interaction with the environment.
Statistical modelling using ANOVA also provides a clear view of plume concentration
within the working environment. These data would be useful for designing an
efficient ventilation system.
5. Conclusion
Experimental result has led to the conclusion about the variation of response
parameters in terms of independent parameters within the specified range. Voltage,
Current, Welding speed and dimension of the welding rod are the most significant
factors for all responses.
From this assessment it can be observed that the fume concentration almost reaches the TLV. If workers are exposed to such a condition for a long period of time, they may contract occupational diseases. However, the study considers the worst-case condition (no mechanical ventilation); in an open atmosphere this model will be ineffective.
6. Acknowledgement
References:
10. J. Norrish, G. Slater, P. Cooper, "Particulate Fume Plume Distribution and Breathing Zone Exposure in Gas Metal Arc Welding", University of Wollongong, New South Wales 2522, Australia.
The shock absorber is widely used in automobiles for shock-absorbing applications. The basic construction of the shock absorber is a spring and damper combination which absorbs shocks while the vehicle is running. In this work, the linear motion of the shock absorber is converted into electrical energy. A linear generator according to the present invention is adapted to generate a voltage proportional to the speed of a movable permanent magnet. The magnet is surrounded by a copper wire coil; as the magnet moves back and forth through the coil, an electric current is generated. One of the advantages of this approach is that the current is produced directly, without the need of a rotary generator. An electromagnetic analysis has been performed to analyze the overall generator design.
Keywords: linear generator, electromagnetic, finite element model in Quick field.
1. Introduction
Shock absorbers are used to damp oscillations by absorbing the energy contained in the springs or torsion bars when the wheels of an automobile move up and down. Conventional shock absorbers do not support vehicle weight. They reduce the dynamic wheel-load variations and prevent the wheels from lifting off the road surface except on extremely rough surfaces, making much more precise steering and braking possible. The shock absorbers considered here turn the kinetic energy of suspension motion into electrical energy. Linear generators have lately been suggested as suitable energy converters in shock absorbers: with a linear generator it is possible to couple the motion of the shock absorber directly to the generator. The generator consists of a stator with copper coils and a linear translator, which carries permanent magnets of alternating polarity in juxtaposition to the shaft through the movable linear translator rod. The permanent magnets are arranged so as to be in tight contact with each other, with the polarity of each adjacent magnet opposite to that of its neighbour. The particular magnet is chosen for this application because it should have the highest magnetic properties to produce the current. Although this requirement satisfies the generator design, the maximum operating temperature of the permanent magnet should be observed to maintain its physical, mechanical and magnetic properties.
2. Design of Permanent Magnet Linear Synchronous Generator
A permanent magnet can be described by its B-H curve, which usually has a wide hysteresis loop (Fig. 4.1). For permanent magnets, the essential part of the B-H curve is the second quadrant, called the demagnetization curve. There are two significant points on this curve: one at H = 0, where the magnetic flux density is equal to Br (remanent magnetic flux density, or remanence), and another at B = 0, where a reverse magnetic field intensity Hc (coercive force, or coercivity) is applied to the magnetized permanent magnet.

[Figure: Demagnetization curves for different permanent magnet materials]

The saturation magnetic flux density Bsat corresponds to high values of the magnetic field intensity, where an increase in the applied magnetic field produces no further significant effect on the magnetic flux density. The maximum magnetic energy per unit volume produced by a PM is the maximum energy density:

wmax = (BH)max / 2   (J/m³)
Based on cost, a second generation of rare earth magnets has been developed with neodymium (Nd) and iron. Nd is much more abundant than Sm. Nd-Fe-B magnets have better properties than SmCo5, but the disadvantage is that their demagnetization curves depend on the temperature, and they are also susceptible to corrosion. Coating Nd-Fe-B magnets with metallic (Sn or Ni) or organic (electro-painting) layers is the best method of protection against corrosion.

The material recommended for the model in this project is Nd-Fe-B, since it considerably improves the performance-to-cost ratio. Ferrites are not used because they would increase the size of the shock absorbers, and SmCo5 would increase the cost.
If B is the magnetic flux density, the total magnetic flux φ can be expressed as

φ = ∫A B dA

where the integral is taken over an area A. So, if the magnetic flux density through the transversal section of a core is uniform:

φ = B A

Where: φ - core flux, B - flux density in the core, A - transversal section area of the core.

The relation between the magnetomotive force (mmf) F and the magnetic field intensity H for magnetic circuits is given by

F = ∮ H dl
Due to the primary part slots, the magnet flux experiences an increase of the real air-gap. Thus the new equivalent air-gap is g' = kC g, where kC > 1 is Carter's coefficient.

Referring to the simplified equivalent circuit of the generator shown in Fig. 4.9, the output voltage is obtained. The main losses in the generator are:
a) Core losses, due to the change of magnetic field; these losses take place in the stator steel and consist of the hysteresis losses and the losses due to eddy currents.
b) Copper losses; these are resistive losses in the coil windings.
c) Mechanical losses, due to friction and ventilation.

The copper (resistive) losses, which are the only losses considered in this application, appear in a conductor of electrical resistance R carrying a current I:

Pcu = I² R

The output power is then the generated electromagnetic power minus these losses.
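A minimal numerical sketch of the copper losses and the resulting efficiency, with all values assumed:

```python
# Illustrative sketch (all values assumed): copper losses P_cu = I^2 * R
# and the resulting efficiency for a given electromagnetic power.
R = 3.5      # total winding resistance (ohms), assumed
I = 2.0      # load current (A), assumed
P_em = 60.0  # generated electromagnetic power (W), assumed

P_cu = I**2 * R       # copper (resistive) losses, the only ones modelled
P_out = P_em - P_cu   # output power after losses
eta = P_out / P_em    # efficiency
print(f"P_cu = {P_cu:.1f} W, P_out = {P_out:.1f} W, efficiency = {eta:.1%}")
```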
A computer program (steady.m) was written in the form of an m-file using Quick field. The force-speed characteristics of the electric shock absorber obtained for the parameters indicated are shown in the corresponding figure.
The steady computer program plots the efficiency-speed characteristic for two different values of the source voltage (VS = 12 V and VS = 24 V). The efficiency-speed characteristics of the electric shock absorber obtained for the parameters indicated are shown in the corresponding figures.
[Figures: Efficiency-speed characteristics; relative speed of the generator with the modified circuit]
6. CONCLUSION
The shock absorber has been designed and analyzed for use in two wheelers. It consists of a permanent magnet linear synchronous generator, a spring, and an electric accumulator. The electric accumulator consists of a controlled rectifier and a battery, and it was not evaluated in the present project. In the design calculations, the dimensions and performance parameters of the currently used mechanical shock absorbers were used as the reference; for this purpose, these shock absorbers were described first.
The results obtained from the dynamic simulation of the electric shock
absorber with the modified output electric circuit show that the oscillations attenuate
to zero after disturbance appears. Therefore, the electric shock absorber works
properly under the modified circuit.
7. REFERENCES
[1] Reimpell, J., Stoll, H., and Betzler, J., "The Automotive Chassis: Engineering Principles", Second Edition, 2001, pp. 347-385.
[2] Crouse, W. and Anglin, D., "Automotive Chassis and Body", Fifth Edition, McGraw-Hill Book Company, 1976, pp. 48-54.
[3] Mendrela, E. and Drzewoski, R., "Electric Shock Absorber for Electric Vehicles", Proc. of BASSIN'2000, Lodz, Poland, 2000.
[4] Gieras, J. and Wing, M., "Permanent Magnet Motor Technology: Design and Applications", Second Edition, Eastern Hemisphere Distribution, 2002, pp. 51-52.
[6] Boldea, I. and Nasar, S., "Linear Electric Actuators and Generators", Cambridge University Press, 1997, p. 46.
[9] Danielson, O., "Design of a Linear Generator for Wave Energy Plant", Master Degree Project, Uppsala University School of Engineering, UPTEC F03 003, January 2003.
Image segmentation of still and real video signals is an important initial task for higher-level image processing such as object recognition or object tracking. Hardware realization is important for achieving very high speed segmentation, of the order of tens of microseconds for a color image and hundreds of nanoseconds for a gray image. A hardware architecture that can segment both color and gray images would therefore be very helpful to the image segmentation field.

The aim of this paper is the realization of a digital algorithm for gray scale/color image segmentation. The implemented algorithm is adaptable to both gray scale and color image segmentation, so only slight modifications are needed to perform both gray and color image segmentation using the same chip. Since only the preprocessing unit differs between the two architectures, the time difference for segmentation of gray and color images is significantly reduced.
CELLULAR NEURAL NETWORKS
Similar to neural networks, CNNs are a parallel computing paradigm, with the difference that communication is allowed between neighbouring units only. Applications include image processing, analyzing 3D surfaces, solving partial differential equations and modelling sensory-motor organs.

CNN processors are a system of a finite, fixed number of locally interconnected processing units with fixed locations and a fixed topology.

Back propagation algorithms tend to be faster, but genetic algorithms are useful because they provide a mechanism to find a solution in a discontinuous, noisy search space.
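To illustrate the neighbour-only communication, one discrete-time CNN-style update can be written as a 3×3 template operation over the cell grid; the templates and input image below are placeholders, not taken from this paper.

```python
# Illustrative discrete-time CNN-style update (templates and input are
# placeholders). Each cell's next state depends only on its 3x3
# neighbourhood via a feedback template A and a control template B.
import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step of dx/dt = -x + A*y + B*u + z, with y = clamp(x)."""
    y = np.clip(x, -1.0, 1.0)  # standard CNN output nonlinearity
    dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + z
    return x + dt * dx

# Edge-detection-like placeholder templates applied to a random input image.
A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])
B = np.array([[-1, -1, -1], [-1, 8.0, -1], [-1, -1, -1]])
u = np.random.rand(64, 64)
x = np.zeros_like(u)
for _ in range(50):
    x = cnn_step(x, u, A, B, z=-0.5)
print(np.clip(x, -1, 1).shape)  # settled cell outputs, shape (64, 64)
```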
Quality plays an inevitable role not only in manufacturing industries but also in healthcare industries (hospitals), retail business, banking and all service-based jobs. In this paper, factors to improve quality in healthcare industries are presented. Interface matrices are created and a ranking has been given for each factor, with priority given to life-saving activities. A pie chart is created using the interface matrices. Minimization of human and technical errors in surgical equipment is selected as the highest priority using the interface matrices. As per the Association of periOperative Registered Nurses (AORN), the electrosurgery generator is considered high-risk equipment. In this project, the electrosurgery process and its equipment are taken up for quality control activities in the healthcare industry. Electrosurgery is the application of a high-frequency electric current to biological tissue as a means to cut, coagulate, desiccate, or fulgurate tissue. The electrosurgery generator and its processes are studied in detail and the possibilities of errors are found out. The severity, occurrence, and impact on patient life of each error are tabulated, and corrective actions are recommended to minimize the errors.
Keywords: Quality Control Improvement; FMEA; Electro surgery Generator;
1. INTRODUCTION
Quality Control [11] is the ongoing effort to maintain the integrity of a process to
maintain the reliability of achieving an outcome. As a process performance
improvement methodology, QC is viewed today as a disciplined, systematic,
measurement-based and data-driven approach to reduce process variation. There
are many methods for quality control. These cover product improvement, process control and people-based improvement. Methods of quality management and techniques that incorporate and drive quality control improvement include ISO 9004:2000, ISO 15504-4:2005, QFD, Kaizen, Zero Defect Program, Six Sigma (6σ), PDCA, quality circles, Taguchi methods, the Toyota Production System, Lean Manufacturing and Kansei Engineering; Six Sigma combines established methods such as Statistical Process Control, Design of Experiments and FMEA in an overall framework. Quality Improvement (QI) as a powerful business strategy has been around for almost twenty years and has grown exponentially in the healthcare industry
during the past five years. In manufacturing, it is quite possible to reduce or even
eliminate (in some cases) most of human variability through automation. In
healthcare industry, the delivery of patient care is largely a human process, and
hence the causes of variability are often difficult to identify and quantify. In this
project, factors necessary for the quality improvement in healthcare industry are
presented. Pareto chart is used to find out the critical factor which affects human life
directly. Minimization of human errors and technical errors in surgery is found to be the most critical one. Here, the electrosurgery process is taken up for quality improvement.
Electrosurgery is the application of a high-frequency electric current to biological
tissue as a means to cut, coagulate, desiccate, or fulgurate tissue. Electrosurgery is
performed using an Electrosurgical Generator (also referred to as Power Supply or
Waveform Generator) and a hand piece including one or several electrodes,
sometimes referred to as an RF Knife. FMEA technique [12] is used for quality
improvement in Electrosurgery process especially related to Electrosurgery
generator.
Sl. No.   Factors   Basic Needs   Economic   Comfort   Life Saving Activity   Criticality in %
3         at ER     3             7          7         0                      20
[Pie chart: criticality of factors, segments 1-13, with the more critical zone indicated]
From the Pie chart, it is clearly known that minimizing the errors in surgical equipments
is most important one where quality improvement is required to save the patient life.
Surgical instruments today span a wide range of devices - from the "low tech"
end of simple sharp knife, to the "high tech" end of nanosecond pulsed surgical laser
systems. With the advent of High Energy surgical devices now available - such as
electrosurgery, cavitational ultrasonic aspirators, harmonic (ultrasonic) knives,
cryosurgery, various laser systems and endocoagulators - it is useful to view these
various devices simply as different means of delivering energy to tissue. Even a simple
scalpel may be viewed as delivering mechanical energy to tissue at a concentrated
pressure point (the blade edge) to incise tissue. No one particular system is inherently better than the others for all surgical purposes. Each may have advantages in certain situations, and user preference is frequently only a personal bias, influenced by past familiarity and training with a system. As per the Association of periOperative Registered Nurses (AORN), the electrosurgery generator is considered high-risk equipment.
Electro surgery and its equipments are explained in the next section.
Electrosurgical units [3,4] utilize AC electricity, but at significantly faster rates of polarity reversal. ESUs utilize frequencies of around 350,000 to 500,000 cycles per second (350-500 kHz); some go up to 3 or 4 megahertz (MHz). This extremely high frequency does not interfere with our own biological processes to any significant degree, so Faradic effects do not apply.
Ohm's law
Coag Mode: Voltage is the parameter enhanced when choosing the "coag" mode on an
ESU.
What is Detectability?
Rating System
Detectability (D)
Probability (P)
Severity(S)
Rating
RPN
Sl. No. | Failure mode | Effects | S (severity rating) | Cause(s) | O (occurrence rating) | D (detection rating) | RPN (risk priority number) | Recommended actions
1 | Power setting for different tissues | Improper level setting; stray energy injuries over other tissues, resulting even in death | 4 | Accidental slip of the instrument from the tissue | 3 | 3 | 36 | Instant response technology - Valley Lab
2 | EMF interference with other O.R. equipment such as video systems | Affects interpretation of data | 2 | Poor electromagnetic interference shielding | 3 | 4 | 24 | 1. Effective process & inspection report; 2. EMI-resistant coating
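The RPN column follows directly as severity × occurrence × detection; a one-line check of the two rows above:

```python
# RPN = Severity x Occurrence x Detection, for the two failure modes
# tabulated above.
failure_modes = [
    ("Power setting for different tissues", 4, 3, 3),
    ("EMF interference with O.R. equipment", 2, 3, 4),
]

for name, severity, occurrence, detection in failure_modes:
    rpn = severity * occurrence * detection
    print(f"{name}: RPN = {rpn}")  # 36 and 24, matching the table
```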
5. CONCLUSION:
Factors to improve quality in healthcare industries were listed, and interface matrices were created to find the high-risk factor for saving patient life. Minimizing the errors in surgical instruments was taken as the high-risk factor involved in life-saving activity. The electrosurgery generator was taken up for the quality control process. The electrosurgery process and its equipment were studied in detail. The failures in the electrosurgery generator were found, and ratings for severity, occurrence and detection were assigned. Based on these, the RPN (Risk Priority Number) was calculated. Causes and action items for each failure were given. By adopting the action item for each process, failures can be reduced to a negligible level, which results in saving the lives of patients and surgeons. Thus the implementation of quality measures in healthcare industries paves the way for saving invaluable human life, along with economic benefit, comfort, etc.
References
At the end of a decade there is a good tradition of taking stock, summarizing the main events of the past 10 years and making predictions for the next decade. To meet the ever-increasing demands for efficiency and high, consistent analysis quality, more and more production laboratories now base their activities on automated procedures for sampling, sample preparation and analysis. There has been a clear increase in cement industry lab automation over the past decade. Important driving factors behind the introduction of automation include fast data capture for quality-control tasks, data management requirements, demand for high and consistent analysis quality, and company policies on projecting a high-tech profile. This paper is an example of such an analysis in the field of laboratory automation in the cement industry.
1. INTRODUCTION
The cement industry is experiencing a boom on account of the overall growth of the
Indian economy. The demand for cement, being a derived demand, depends primarily on the
industrial activity, real estate business, construction activity, and investment in the infrastructure
sector. India is experiencing growth on all these fronts and hence the cement market is flourishing
like never before. Indian cement industry is globally competitive because the industry has witnessed
healthy trends such as cost control and continuous technology up gradation. Global rating agency,
Fitch Ratings, has commented that cement demand in India is expected to grow at 10% annually in
the medium term buoyed by housing, infrastructure and corporate capital expenditures.
The Indian cement industry is the second largest producer of quality cement, which meets global standards. The cement industry comprises 130 large cement plants and more than 300 mini cement plants. The industry's capacity at the beginning of the year 2008-09 was 198.30 million tonnes.

Cement production during April to October 2008-09 was 101.04 million tonnes, as compared to 95.05 million tonnes during the same period of 2007-08. Despatches were 100.24 million tonnes during April to October 2008-09, against 94.33 million tonnes during the same period of 2007-08. During April-October 2008-09, cement exports were 1.46 million tonnes, as compared to 2.16 million tonnes during the same period of 2007-08.
To compete in the market every industry has to prove themselves through their product
for achieving
Lower cost
Quality
Zero complaints
To achieve zero complaints from the customer, stringent quality control norms have to be adopted in the process. For that, cement industries are now installing a Robo Lab for ensuring 100% quality parameters.
To meet the ever-increasing demands for efficiency and high, consistent analysis quality
more and more production laboratories now base their activities on automated procedures for
sampling, sample preparation and analysis. There has been a clear increase in cement industry lab
automation over the past decade. As in the steel industry, the automated central laboratory has
become the accepted industrial standard. Important driving factors behind the introduction of
automation include fast data capture for quality-control tasks, data management requirements,
demand for high and consistent analysis quality and company policies on projecting a high tech
profile. However, the cost of the laboratory operation has of course been the single most important parameter overall. Labour cost savings are rather simple to account for in an investment justification, but this is not the case with most of the other important potential benefits. Till 2006, cement industries were adopting manual sampling and analysis processes for quality control activities.
2. AUTOMATION CONCEPTS
In this concept, the sample preparation units and the analysis equipment are automated and then linked together by conventional transport belts or the like. The automation is provided by dedicated, highly specialized equipment units.
QCX/RoboLab
A typical configuration consists of standard industrial robot placed in the centre of a circular
arrangement of sample preparation and analytical equipment. Samples normally arrive
automatically from the connected automatic sample transport system, but may also be entered via
operator sample conveyors or special input/output magazines. QCX/RoboLab offers a very high
flexibility in terms of the number and types of equipment handled by the robot. Supported, fully
automated preparation & Analysis disciplines relevant to the cement industry include powder or
fused bead preparation for X-ray analysis, particle sizing by laser or by conventional sieving, color
analysis, Carbon/Sulphur/Moisture combustion analysis, physical testing and collection of shift/daily
composites. For the typical cement lab project a throughput capacity of 10-20 samples will apply;
but higher numbers in one robot cell are achievable.
The QCX computer integrates the system components. It identifies incoming samples,
downloads the relevant sample-handling specification and controls all intelligent devices in the
configuration. Sequence control includes priority handling, intelligent handling of equipment failure
situations and much more.
QCX/RoboLab (and QCX/Auto Prep) provides high quality in sample preparation and
analysis. Quality not only meets the performance of ‘the very best lab technician’, but is highly
consistent over time. Thus, there are no fluctuations from shift to shift in analytical levels due to
small differences in the practical procedures undertaken by human operators.
A robot is made up of two principal parts:
Controller
Manipulator
We can communicate with the robot using a teach pendant and an operator panel located on the controller.
3. CONCLUSION
Eliminating manual sampling with the Robo Lab brings a number of benefits: consistent quality of the product; human error in sampling is totally eliminated; analysis can be completed in time and timely corrective action can be taken; standard deviation and variation are considerably reduced; and customer complaints are reduced. Even though the installation cost is too high, all the new cement plants now prefer this type of Robo Lab for achieving consistency in quality.
4. REFERENCE