Thursday, 3 July 2014

A benchmark for the flexibility and adaptability of a maintenance management system

The article below was extracted from:
http://www.reliableplant.com/Read/28900/condition-monitoring-saving

Although the content is supposed to focus on the advantages of good condition monitoring, pay attention to how maintenance parameters such as criticality, priority and frequency change dynamically. Notice also the flexibility and adaptability of their system in coping with that dynamism. That is what I would consider a World Class Maintenance practice.

Saving Time and Money with Condition Monitoring

    
A recent acoustic emission (AE) study identified a potentially critical bearing failure, which was addressed through a planned preventive action at a leading food manufacturer. This avoided considerable cost and unplanned downtime.
The acoustic emission equipment and the main tool used during the planned inspection routes were manufactured by Holroyd Instruments. This example shows the value of this type of equipment in avoiding a major unplanned event that could have had massive cost consequences for the business. Collateral damage to the associated equipment would have proved very costly, and the lead time to rebuild could have caused extensive downtime, leaving disgruntled customers unable to rely on stock availability.
The story began in April 2010 when some initial elevated readings were noted at two node points on a large step-down transfer gearbox that were sampled on a seven-day routine. The distress readings were elevated and triggered the alarm level. They were of concern and evident on subsequent inspections. The third elevated reading that was part of an upward trend instigated a planned work order in the computerized maintenance-management system (CMMS) to investigate and take further action. This equipment could not be taken out of service lightly, as it was at the time constrained by high production demands. Experience with a sister line’s previous planned bearing change also played an important part in the escalation of the risk.

The input side of a transfer gearbox is shown with the output bearing node point on the left-hand side.
On more detailed inspections, it was determined that the bearing with the highest distress was indeed the suspected output bearing. Audible clicks were loud and clear at the output end bearing. The two bearings at the node locations were on the drive line of the motor at the input and output ends of the transfer gearbox. The adjacent bearings on the large, helical step-down gear were still reading low and had no audible clicks. The engineering manager was advised that there was an anomaly on one of the input bearings, that the others were in good condition and that production could continue with targeted condition monitoring. The routine oil sampling was increased from monthly to every two weeks. AE inspections were increased, with spectrum readings now on a four-day cycle. This would give some comparative evidence when the new bearings were eventually fitted.
The planned change of the bearing set was arranged with the production planner, maintenance manager and product specialist. It became clear that the equipment would have to operate for at least another six months until it became available. Contingency plans were formulated for an emergency change if any of the AE readings or oil samples showed advances toward failure. Warnings were issued that this could occur rapidly if the bearing failed. A new bearing set was purchased, and a meeting was scheduled with the bearing manufacturer to examine the used bearings when they were eventually changed in early 2011.
The AE readings stayed at an elevated level during this long waiting phase, and oil sample results showed no elevated readings in the key elements associated with roller bearing failures. During the weeks before renewal, many spectra were taken from all points of the gearbox for future evaluation. This would rule out frequencies from the oil pump and other components around the assembly. An AE envelope spectrum graph before the bearing change is shown below.

As can be seen, there was something creating a spike at 73 Hz, which happened to match the defect frequency of the bearing race. This provided a clue that there was a race surface defect of some kind and not an element breaking down or the cage disintegrating.
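As a quick illustration of how such a match is checked, the standard rolling-element defect frequencies can be computed from shaft speed and bearing geometry. The Python sketch below uses the textbook BPFO/BPFI formulas; the shaft speed and geometry figures are illustrative only and are not taken from this article.

    import math

    def bearing_defect_frequencies(shaft_rpm, n_rollers, d_roller, d_pitch, contact_deg=0.0):
        # Textbook rolling-element defect frequencies (Hz) from shaft speed and geometry.
        fr = shaft_rpm / 60.0                                    # shaft rotational frequency, Hz
        ratio = (d_roller / d_pitch) * math.cos(math.radians(contact_deg))
        bpfo = (n_rollers / 2.0) * fr * (1 - ratio)              # outer-race defect frequency
        bpfi = (n_rollers / 2.0) * fr * (1 + ratio)              # inner-race defect frequency
        return {"shaft_Hz": fr, "BPFO_Hz": bpfo, "BPFI_Hz": bpfi}

    # Illustrative numbers only: a 300 rpm output shaft with a 16-roller bearing.
    print(bearing_defect_frequencies(shaft_rpm=300, n_rollers=16, d_roller=30.0, d_pitch=200.0))

A measured spike that lines up with one of the calculated frequencies, as the 73 Hz peak did here, points to a defect on the corresponding race rather than a failing element or cage.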
The bearing change finally took place, and the production plant was turned around within 12 hours so that the equipment did not incur any unplanned downtime. The used bearing set was returned with the transfer gearbox, and the two units were degreased. On first inspection, they both looked similar and in good order. The elements and cages were then dismantled from the outer and inner races, with care taken to keep them in order and in the correct aspect for reassembly later.
It became clear that on the suspect output bearing, a major spall had developed on the inner race, and every element was pitted by the debris that had been emitted. At this point, a representative from the bearing manufacturer was invited to visit and examine the bearings. He concurred that the bearings had lasted very well considering the atmosphere and heat in which they had operated for almost 10 years. This would be considered an end-of-life mode of failure. It may have lasted many more months or could have accelerated to failure within days or weeks. The photograph below is of the spall, which measured approximately 10 mm in length and 2.5 mm wide.

Spall damage to the race is shown above with feathered edges and surface pitting in the loaded area of the spalling. Note the next layer of material on the right-hand side that would have given way.
When the remedial work had been completed, additional spectrum samples were recorded and monitored to learn more. Carpet noise levels were lower, and the decibel scale was a third of the previous graph example. The maximum peak was now less than 0.4 decibels, while the carpet level was less than 0.2 decibels.
In conclusion, the systems and tools relied on every day proved effective in capturing this anomaly before it turned into a major event. The key to this was the full involvement of engineering with operations to plan in the remedial work with as little disruption as possible.
Among the lessons learned:
  • The seven-day inspection frequency was correct for this critical plant.
  • The preventive action was started at the earliest opportunity.
  • The equipment enabled the pinpointing of the bearing fault.
  • The audio facility provided a second reference that linked rpm with the audible clicks.
  • Together, these gave sufficient evidence for the planned work to commence at the earliest opportunity.
Root-cause analysis was carried out directly after the bearing change was completed to investigate any future recommendations for servicing this equipment. It was decided that as the bearings had reached their end-of-life cycle, there was no need to alter any future planned maintenance. Condition monitoring with AE had provided the confidence to pick up any anomalies at a very early stage in the curve.

Wednesday, 2 July 2014

Equipment Registry aka Master Equipment List aka Asset Register


The term used for the registry or list varies from organisation to organisation, but it basically refers to the list of all money-making assets constructed in the process plant or facility.

To set the scene, I'm quoting from a Reliable Plant article authored by Bob Schindler:

"The equipment registry is one of the most important tools in your kit when it comes to maintenance and reliability. It can be the foundation of your planned maintenance, lubrication, training and repair programs, as well as help with regulatory compliance and safety programs.

Your spare parts management program depends upon a complete and accurate registry with the requisite analysis for regular service parts along with the insurance spares identified through failure modes and effects analysis. Don’t forget that your financials are also tied in through depreciation, amortization and cost center assignments.

Equipment history gets tied to the registry along with manuals, drawings, procedures, labor costs and reports. That is why it is the foundation upon which so much is built, and that is why it is so vital that you get it right and work to maintain its accuracy.

While it has an initial cost and a maintenance cost, the payback can be significant and continuous, so make the investment even if you have to bump something else down the list. The man-hours that you save long term will repay your investment many times over. You can consider the downtime and spares savings as icing on the cake."

I personally could not emphasize enough how important this is. It may sound obvious that having an Asset Register is the most fundamental thing to do, but unfortunately, common sense is not as common as we all think. I have been to many plants, and I have never yet seen an Asset Register that is 100% complete. The best would be somewhere around 98%; the worst I have personally seen would probably be around the 70% mark, along with poor labelling and documentation. I would not be surprised to walk into a plant without an Asset Register at all. Why? I have come across plant managers who don't know their plant's statutory requirements or the licenses required to be a plant manager.

To the people initiating projects out there, please ensure your contractor provides you with a complete register. To management, please don't slash the budget for such things. It will cost the organisation big money for a loooooong time.

The Fundamentals of Mineral Base Oil Refining - Lubrication Oil

Article extract from Reliable Plant newsletter:
http://www.machinerylubrication.com/Read/28960/mineral-oil-refining

    
Approximately 95 percent of the current lubricant market share is comprised of conventional (mineral-based) oils. Most people know these mineral oils are derived from a crude stock, but how much do you really know about the refining process?
The petroleum that flows from a well in the form of crude oil comes in many varieties and types, ranging from light-colored oils containing mostly small hydrocarbon molecular chains to black, nearly solid asphalt-like large hydrocarbon chains. These crude oils are very complex mixtures containing a plethora of different compounds made of hydrogen and carbon. These compounds (known as hydrocarbons) can range in size from methane (containing one carbon and four hydrogen atoms) to massive structures with 60 or more carbon atoms. This molecular size distribution can be used to our advantage.

The Importance of Refineries

Most lubricating oils come from petroleum or crude oil. In order to get a lubricating oil from a crude oil, the crude oil must be sent through a refinery. The refinery takes from the crude oil a lot of molecules of various sizes and structures that can be used for different things. For example, gasoline, diesel and kerosene are all derived from crude oil. Lubricating oil relates to hydrocarbon molecules of a particular size (in the range from 26 to 40 carbons). Fairly large and heavy molecules are needed to work as lubricating oils. The molecules that are used with gasoline and kerosene are smaller and have fewer carbons in the structure of the molecule. The refinery puts these molecules in little silos based on size and weight, and removes impurities, enabling each of the products from the crude oil to be utilized.
After the crude oil is desalted and sent through a furnace where it is heated and partially vaporized, it is sent to a fractionating column. This column operates slightly above atmospheric pressure and separates the hydrocarbons based on their boiling points, which are directly affected by their molecular size. In the fractionating column, heat is applied and concentrated at the bottom. The hydrocarbons entering the column will be vaporized. As they travel upward in the column, they will cool until they condense back into a liquid form. The point at which this condensation occurs varies again based in part on the molecular size.
93% of lubrication professionals would purchase a lubricant with a high-quality base oil at a higher initial price instead of a lubricant with a low-quality base oil at a lower initial price, according to a recent survey at machinerylubrication.com.
By pulling the condensing liquid from the column at different heights, you can essentially separate the crude oil based on molecular size. The smallest of the hydrocarbons (5 to 10 carbon atoms) will rise to the very top of the column. They will be processed into products like gasoline. Condensing just before reaching the top, the compounds containing 11 to 13 carbon atoms will be processed into kerosene and jet fuel. Larger still at 14 to 25 carbon atoms in the molecular chain, diesel and gas oils are pulled out.
Those compounds with 26 to 40 carbon atoms are a tribologist’s main concern. This is the material used for the creation of lubricating oil. At the bottom of the column, the heaviest and largest of the hydrocarbons (40-plus carbon atoms) are taken and used in asphaltic-based products.
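Put another way, the column cut is essentially a lookup on chain length. The short Python sketch below encodes only the carbon-number ranges quoted above; the function name is mine.

    def crude_fraction(carbon_atoms):
        # Carbon-number ranges as quoted in the article text above.
        if carbon_atoms > 40:
            return "asphaltic products (column bottoms)"
        if carbon_atoms >= 26:
            return "lubricating oil"
        if carbon_atoms >= 14:
            return "diesel and gas oils"
        if carbon_atoms >= 11:
            return "kerosene / jet fuel"
        if carbon_atoms >= 5:
            return "gasoline"
        return "light gases (e.g. methane)"

    print(crude_fraction(32))  # -> lubricating oil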
After the distillation process, the compounds need to be refined for their intended purpose. This step in the process is done to reduce the tendency of the base oil to age (oxidize) in service and also to improve the viscosity/temperature characteristics. There are two ways this can be done. The first involves a separation process where there are two products being made: a desired lube product and undesirable byproducts. The second way, which is quickly becoming the favored of the two, is a conversion process. This process involves converting undesirable molecular structures into desirable structures with the use of hydrogen, heat and pressure.

Extraction Process

The following is a simplified description of the extraction process:

Deasphalting

Propane deasphalting takes the residuum from the very bottom of the column (the heaviest, largest molecules) and separates them into two products: tar and compounds that are similar to the lube distillates but have a higher boiling point. This material is called deasphalted oil, and it will be refined in the same manner as the lube distillates.

Solvent Extraction

Solvent extraction is the term used for the removal of most of the aromatics and undesirable constituents of oil distillates by liquid extraction. Commonly used solvents include phenol, furfural and sulphur dioxide. The resulting base stocks are raffinates (referred to as neutral oils) and an extract that is rich in aromatic content, which is highly sought after as a process oil or fuel oil.

Dewaxing

After solvent extraction, the raffinates are dewaxed to improve low-temperature fluidity. This process again produces two products: a byproduct wax that is almost completely paraffinic and a dewaxed oil that contains paraffins, naphthenes and some aromatics. This dewaxed oil becomes the base stock for many lubricants, but there is one more process that can be done to make a premium product.

Hydrofinishing

Hydrofinishing changes the polar compounds in the oil by a chemical reaction involving hydrogen. After this process, an observer would notice a lighter-colored product and an improved chemical stability. The final quality of the base oil is determined by the severity of the application of temperature and pressure in the hydrofining process.

Conversion Process

The following is a simplified description of the conversion process:

Hydrocracking

In this refining process, the distillates are subjected to a chemical reaction with hydrogen in the presence of a catalyst at high temperatures and pressures (420 degrees C and 3,000 psi). The aromatic and naphthene rings are broken, opened and joined using hydrogen to form an isoparaffin structure. The reaction with hydrogen will also aid in the removal of water, ammonia and hydrogen sulfide.

Hydrodewaxing

During hydrodewaxing, much like hydrocracking, a hydrogenation unit is used to deploy a catalyst that is specific to converting waxy normal paraffins to more desirable isoparaffin structures.

Common Mineral Oil Molecules

Hydrotreating

Because the previous two processes involve breaking chemical bonds between two carbon atoms, it is necessary to saturate any unsaturated molecules. This is easily done by introducing more hydrogen. These saturated molecules are more stable and will be able to resist the oxidation process better than the unsaturated variety.

There are slight differences in the characteristics of the finished base oil produced by these two processes. The main difference lies in the aromatic content. The conversion process can reduce the aromatic content to around 0.5 percent, while the extraction process lingers around 15 to 20 percent. This difference in aromatic content affects the properties of the finished base oil.
It would appear that the conversion process produces a better quality product, but there is always a trade-off. The cost of refining oil using the conversion process is somewhat higher than the extraction process. This extra cost incurred by the refiner is eventually passed on to the customer. However, in this case, the customer typically gets what he pays for - a higher quality base oil at a higher initial price.

About the Author
Jeremy Wright
Jeremy Wright is a Senior Technical Consultant for Noria Corporation.

Monday, 22 April 2013

Managing the use of Extreme Pressure (EP) Oil Additives

Article extract from Reliable Plant newsletter:
http://www.machinerylubrication.com/Read/28958/ep-additives-effects

The Effects of EP Additives on Gearboxes  


Oil additives offer a wide range of benefits, but in some circumstances they can actually be harmful to the machines to which they are added. For example, let's look at worm gearboxes. These machines have gearing composed of yellow metal (typically bronze). Certain extreme pressure (EP) additives can chemically react with these softer metals, causing premature wear and even failure.
Worm gearboxes are mainly comprised of two units: the worm and the worm wheel. The worm is what actually drives the worm wheel. It is a rod with a helical ridge on its surface that allows it to mesh with the teeth of the worm wheel to provide rotary motion.

62% of lubrication professionals use extreme pressure (EP) oils to lubricate worm gears, based on a recent survey at machinerylubrication.com.

These gearboxes are great for achieving high reduction ratios as well as high torque. In order to increase either of these values, the worm wheel is made larger in diameter. The larger the circumference of the worm wheel, the greater the speed reduction and the greater the torque imparted through the output shaft.
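To put some rough numbers on that relationship: for a single-start worm, the reduction ratio equals the number of teeth on the wheel, and output torque scales with the same ratio (ignoring friction losses, which are in reality significant in worm drives). The figures below are made up purely for illustration.

    def worm_drive(input_torque_nm, input_rpm, wheel_teeth, worm_starts=1, efficiency=1.0):
        # Idealised worm drive: ratio = wheel teeth / worm starts; speed divides, torque multiplies.
        ratio = wheel_teeth / worm_starts
        return {
            "ratio": ratio,
            "output_rpm": input_rpm / ratio,
            "output_torque_nm": input_torque_nm * ratio * efficiency,
        }

    # Doubling the wheel's tooth count (a larger-diameter wheel) doubles both reduction and torque.
    print(worm_drive(10, 1500, wheel_teeth=30))
    print(worm_drive(10, 1500, wheel_teeth=60))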
Generally, the worm is made of steel, while the worm wheel is made of a yellow metal. However, in some cases, both the worm and worm wheel are steel, or they both may be yellow metals. The worm is always harder than the wheel.
Yellow metals, as the name suggests, are yellowish in color. They are alloys that contain copper. A standard definition would be a type of brass having about 60 percent copper and 40 percent zinc. Bronze is another type of yellow metal. These metals have been used for centuries to form gears and other components of simple machines.

Copper Strip Corrosion Test

An easy way to determine which form of sulfur is being utilized in your EP oil is to look at the results of the copper strip corrosion test (ASTM D130). In this test, a strip of copper is immersed in the fluid to be tested at 40 degrees C and again at 100 degrees C. The strip is removed after each test and checked for staining of the copper. The results range from very little to no staining (1a) all the way to very dark stains (4c). If the results are in the area of 1b to 2a, then the yellow metals in your worm gearboxes could be at risk for chemical attack.

EP additives that contain sulfur cause the most damage to these types of metals. Two different types of sulfur may be used within these additives. The first type is active sulfur. Sulfur in its active state readily reacts with metal surfaces to form a ductile metal soap that is sacrificial and allows opposing surfaces to contact one another with minimal damage. Active sulfur is chemically aggressive, and with yellow metals being softer than steel, they can begin to pit and form spalls due to this chemical attack.
Rising temperatures can increase the rate at which this reaction takes place. This is explained by the Arrhenius rate rule, which states that the rate of a chemical reaction doubles for every increase of 10 degrees C (18 degrees F) in oil operation temperature.
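That rule of thumb is easy to express as a multiplier. A minimal sketch:

    def reaction_rate_multiplier(delta_temp_c):
        # Rule of thumb quoted above: the reaction rate doubles for every 10 deg C rise.
        return 2 ** (delta_temp_c / 10.0)

    # A gearbox sump running 30 deg C hotter reacts roughly 8 times faster with the yellow metal.
    print(reaction_rate_multiplier(30))  # -> 8.0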
The second type of sulfur used within EP additives is inactive sulfur. It is less likely to bond to surfaces and react chemically.
Active sulfur in some EP additives reacts with the copper within the brass or bronze. Sulfur, when in contact with copper in the presence of heat, forms copper sulfide. This simple chemical reaction can have devastating repercussions on the reliability of machines. In extreme pressure situations, copper disulfide can be formed. Both of these crystalline compounds are very hard and can cause abrasive damage to soft machine surfaces.

Worm Wheel
With all the risks associated with chemical attacks on yellow metals, why make gears using these metals in the first place? Brass and bronze are easy to machine into different shapes and yet have good strength and hardness. It all comes down to economics. When you factor in machining costs and raw materials costs, yellow metals are a very cost-effective alternative to steel.

EP additives can cause damage to a worm wheel, which is usually bronze.
In addition, brass properties are easily changed by incorporating different metals into the metallurgy. For instance, lead can be added for enhanced machinability. For increased corrosion resistance, aluminum or tin can be added into the makeup of the alloy. The possibilities are endless for the types of alloys you can make with brass and bronze.
By understanding some simple chemistry and reading the product data sheets of the lubricants you put into your gearboxes, you can increase reliability. When adding EP oil to gearboxes containing yellow metals, remember to check the copper strip corrosion test (ASTM D130) to help predict if there will be any issues with compatibility of the metallurgy within these machines.

The "micro-skill" that drives Change in Culture

A good article from Reliable Plant newsletter in regards to Change.
http://www.reliableplant.com/Read/28886/communicating-effectively-change
Communicating Effectively During Change

I have never been known to be musically inclined, but I can recognize a great song when I hear one. One of these great songs is “I Heard It through the Grapevine.” This particular song has been recorded and re-recorded numerous times over the years by many different artists, and although it may bring thoughts of animated raisins to people in my generation, it is more closely associated with Marvin Gaye.
“Oh, I heard it through the grapevine,
Oh, and I’m just about to lose my mind,
Honey, honey, yeah.”

Just as the grapevine in this song had a strong impact, the communication grapevine remains an extremely powerful medium for corporate communications. The effectiveness of grapevine communication and its use – both intentional and unintentional – should not be ignored.
When coaching clients during major change initiatives, we continuously stress the importance of effective communication. Experience has shown that clients who struggle to communicate also struggle to successfully implement major change. Even post-project analysis of very successful projects often finds the company could have communicated more frequently and effectively at some point in the project.

One of the key success factors in successfully implementing a major change initiative is creating a comprehensive change-management strategy and then integrating it into the project-management plan. Creating a communication plan is one of the most critical elements contained within this change-management strategy. In this plan you identify target audiences, determine key messages, choose the preferred sender and select the appropriate communication channel for that message.
The majority of communication channels typically chosen are formal communication channels. The communication channel that is often forgotten is the informal communication channel, and this brings us to the topic of the infamous grapevine.

So what is the grapevine, how does it work and how can you use it to effectively communicate during major change? I am sure that the grapevine is as old as time itself, but I will discuss its context within modern organizations.

Grapevine Decoded

Every organization has both an informal and formal organizational structure as well as formal and informal communications. Simply stated, the grapevine is a type of informal communication channel. It’s all about people communicating directly with other people outside official channels of communication.

Your background and experience influence how you view concepts. For example, my background in electronics and submarine nuclear power often leads me to relate concepts to equations to enhance understanding. Personal experiences over the years related to the grapevine can also be translated and simplified into an equation to help us understand how the grapevine works. The amount of communication or “chatter” on the grapevine can be explained by the following equation:
Grapevine Chatter = Information Void + WIIFM + Recent News + Insecurity


Information Void

The laws of supply and demand apply equally to grapevine chatter and economics. An information void exists when the information demanded exceeds the information supplied. The supply and the demand of the information are not defined by the organization but by the individual person who desires the information. An information void will be filled with something – either rumors or valid information. The larger the information void, the greater the amount of chatter in the grapevine.


WIIFM

What’s in it for me (WIIFM) seems to show up in many places when we are talking about organizational change. Regardless of the situation, when change occurs our natural tendency is to translate this into a WIIFM context. This is what you are listening for. How does this change affect me, my pay, my family, my free time, etc.? Whether that WIIFM is good or bad, it creates a vested interest. When people have a vested interest, they will want information. The greater the impact on WIIFM, the greater the amount of chatter on the grapevine.

Recent News

Many organizations are stunned at how breaking news hits the grapevine at breakneck speed. Even something as simple as an office remodeling (occurring in our offices right now) can generate significant grapevine chatter. The fresher the story, the greater the chatter on the grapevine.

Insecurity

The impact of the WIIFM factor is exponentially compounded by the level of insecurity that exists. The greater the amount of insecurity that exists within the organization, the greater the amount of chatter that will exist on the grapevine. For example, with the current fragile state of the economy, one can easily see how this factor can become extremely high.

Rumors

As stated earlier, an information void will be filled. When the desire for information is high and the number of facts that are known is low, the number of rumors flying is huge. Most of us have experienced this firsthand, and sometimes it is not a pretty sight. Regaining control of information in the midst of flying rumors is extremely difficult. The longer a rumor is allowed to fly, the more difficult it is to replace it with valid information. While some people try to fight rumor with rumor, the only effective way to combat rumor is with facts. When a large number of rumors exist, an even larger number of facts must be communicated to combat the rumors.

Leveraging the Grapevine

Knowing the factors that make up grapevine chatter – information voids, WIIFM, recent news and insecurity – you can proactively intervene with frequent and effective communication. Fill information voids with accurate information before rumors materialize. Proactively communicate when breaking news is expected. When information (such as impending mergers and acquisitions) is about to be communicated, be prepared and react quickly after the message is released. When communicating change initiatives, ensure that you communicate the impact of the change on the individual.

Addressing the factors associated with grapevine chatter can minimize but never totally eliminate the amount of informal communication occurring. However, by better understanding the grapevine, you can successfully leverage it as part of your overall communication strategy.
One of the tenets of a good communication strategy is evaluating the effectiveness of your communication. This is accomplished by obtaining feedback. What better way to gather feedback than to take advantage of an existing channel of communication?

Tapping into the Grapevine

Over the course of my career, I have been able to tap into the grapevine at your typical places — the water cooler (scuttlebutt in Navy terminology), the coffee pot and the smoke break area. Tapping into the grapevine is not normally achieved overnight. Grapevine communicators are a very selective bunch. They will not share all information with everyone. There must be some level of relationship and trust established, and building relationships and trust takes time. To accomplish this, you must get out of the office, talk to people and most of all listen.

But while the traditional grapevine is thought of as being a face-to-face or oral type of communication, this is no longer the case. Advances in technology and recent trends in social networking have significantly transformed the modern grapevine. Informal communication now occurs through email, texting, Twitter and on social-networking sites such as Facebook.
Implementing major change in an organization is a complex and challenging task. In the end, creating organizational change is about cumulatively creating change in individuals. Successfully leading major change requires successfully leading individuals. To successfully lead individuals through change, you must be able to communicate effectively. You must find new ways to connect to people and communicate in every imaginable way. That includes tapping into the grapevine. Without it, you just might lose your mind. Honey, honey, yeah.

About the Author
Dave Berube, a senior consultant for Life Cycle Engineering (LCE), has more than 30 years of experience in leadership and management. His expertise includes behavioral change management, project management and development, and process improvement within various types of organizations. You can reach Dave at dberube@LCE.com.

Friday, 5 April 2013

Particle Count in Oil Analysis

A good article from Reliable Plant Newsletter:
http://www.machinerylubrication.com/Read/28974/particles-friend-foe

Particles: Friend or Foe? Understanding the Value of Particles in Oil Analysis


  
In the field of tribology, the word “particles” means different things to different people. The following case studies illustrate how differently the mechanical engineer, tribologist, sampler, analyst and diagnostician interpret the presence of particles.

The Mechanical Engineer and Tribologist

To the mechanical engineer and tribologist, the presence of particles is an indication that contaminants have entered the system or that certain components are wearing abnormally. Particles that are smaller than the minimum clearances could result in abrasive wear, which in turn causes premature aging or failure. Large particles could result in blockages of oil channels, which could lead to oil starvation. Thus, both conditions spell trouble to these role players.


This illustration shows how particles cause damage to parts in contact. (Ref. Triple-R Oil Cleaner)

The Sampler

The main concern of the sampler is to produce a homogenous sample that is representative of the bulk volume of oil in the system. The presence of particles complicates the task of the sampler, as particles tend to settle at the bottom of the tank/sump.
Prior to sampling, oil should be hot and well agitated to ensure that the sample includes particles that have settled. For routine oil analysis, the container must not be filled more than 80 percent to enable the laboratory to agitate the sample prior to analysis.
Improper sample handling includes overfilling containers, decanting samples that were originally filled to the top and sampling when the oil has not been circulated sufficiently prior to sampling. Overfilling a container leads to insufficient agitation. Shaking the container prior to decanting will result in large particles remaining at the bottom of the container. There’s also the possibility that the less contaminated portion is decanted, causing the laboratory result to be higher than usual.

 


The Analyst

Once the samples reach the laboratory, the presence of particles directs the tasks and methods that the chemical analyst will use to analyze the samples. The method of sample preparation, the analytical techniques and instrumentation required to ensure that the results are representative of the condition existing in the application all depend on the type, size, properties and distribution of the particles present in the samples.
Various analytical techniques, including inductively coupled plasma (ICP) spectrometers, the flow cell of Fourier transform infrared (FTIR) spectrometers and some particle counters, rely on peristaltic pumps and transport systems (tubing) to introduce samples to the various instruments. When large particles are present in samples, the possibility exists that the tubing could become blocked.

68% of machinerylubrication.com visitors view the presence of particles as a valuable indicator in an oil analysis sample.

Analysts also must be aware of the tendency of particles to settle at the bottom of the container. Prior to each analysis, samples should be agitated sufficiently to ensure a homogenous state. Lowering of the fluid’s viscosity either due to fuel dilution in the engine or dilution due to analytical requirements (e.g., ICP) aggravates the tendency of particles to settle. With ICP analysis, the samples must be diluted to assist with the transportation process. Due to dilution, suspended particles are more prone to settle out on the bottom of the test tube and will not be available for analysis. However, no dilution is required with rotating disk electrode (RDE) analysis.

The Diagnostician

Particles can be of value to a diagnostician who studies the shape and nature of particles found in a sample. A scanning electron microscope (SEM) can assist in investigating the root cause of mechanical failure by allowing the diagnostician to pay special attention to evidence such as scratch marks on particles and methods of particle formation.
Fine filtration is a proactive process aimed at removing contamination and wear particles from the system. If this process is not executed with special care, knowledge and sensitivity to the value that particles add for the diagnostician in root-cause analysis, crucial evidence can be lost.

Case Study #1: RDE vs. ICP Spectrometry

In 2002 the Eskom laboratory changed from ICP to RDE spectrometry to perform wear metal analysis on used oils. To obtain a new baseline, it was essential to perform both spectrometric methods as well as the ferrous particle quantifier (PQ) on all samples received for a three-month period.
When the spectrometric results were plotted against the PQ values, it was determined that the higher the PQ value was for a sample, the greater the difference between the ICP and RDE results. For a PQ value of 15 milligrams of iron per liter (mg/l Fe), the expected difference between the two techniques was about 0 to 5 ppm. However, above a PQ value of approximately 75 mg/l Fe, the relation seemed to become non-linear, where the differences between ICP and RDE results were from 50 to more than 500 ppm.


This graph charts the relationship between RDE and ICP relative to PQ as determined on samples of different sources.
One sample with a PQ value of 1,712 mg/l Fe had an iron value of 699 ppm with ICP. The result on the RDE for this same sample was found to be in the region of 3,000 ppm. The difference in results obtained by the two spectrometric methods was as high as 2,300 ppm.
When the wear trends of the unit with the PQ value of 1,712 mg/l Fe were examined, the ICP results gave the impression that the problem was either resolved or stabilized. However, when the RDE results became available, it was evident that there was an increase in wear. The final report recommended the unit be shut down for emergency maintenance.
Due to the lower particle size limitation of the ICP, a plateau was reached much sooner than with the RDE. Applications most affected by the ICP’s lower size limitation were those that did not have internal oil filtration systems such as gearboxes and certain compressors.
Geometry of the particles being analyzed by the RDE also affected the results. For example, if thin flakes of metal were present in the sample, flakes that had flattened out on the RDE gave a different reading than particles that had not flattened out. Thus, the results on the RDE varied due to the particle size as well as the geometry of the particles.

Case Study #2: Severe Scratching in a Locomotive Engine

The engine of a particular locomotive was replaced with a newly refurbished engine. When the engine was installed, the maintenance team had difficulty eliminating abnormal vibration in the engine. Eventually, it was determined that a bent flywheel caused the vibration.
As soon as the vibration problem was eliminated, scratching noises were audible. Everything was checked, yet the source of this noise could not be traced. The maintenance engineer decided to involve the laboratory that performed the oil monitoring program in the investigation.
Since the engine was recently refurbished and the original source was unknown, the laboratory had no history on which to base the diagnosis. To obtain more knowledge about the solid content of the oil sample, the lab employed specialized methods, such as the energy-dispersive X-ray (EDX) scan technique using the SEM.
To find out if the noise was due to insufficient lubrication, the laboratory determined the oil’s viscosity. This was to establish if metal-to-metal contact had occurred as a result of the oil being too thin. A new oil sample of the specified lubricant was submitted for comparison with the oil sample taken from the engine.
A PQ analysis was then conducted to determine the magnetic property of the oil, followed by spectrometric elemental analysis using RDE spectrometry. An EDX scan using the SEM was performed on particles caught after the sample was filtered through a 0.8-micron-filter membrane and rinsed with pentane to remove oil residue.
The results revealed that the viscosity was acceptable when compared to that of the reference sample, while the PQ values were very high (more than 1,000 mg/l Fe). The RDE spectrometric analysis indicated an increase in copper, iron and zinc when compared to that of the reference sample.
The EDX scan using the SEM found the following components on the filter:
  • High occurrence of white metal bearing material
  • Metal frets
  • Iron, lead and copper shavings with scratch marks
  • Metallic iron shaving with lead bound to it
  • Zinc particles not in combination with copper
  • Mineral/rock/soil containing calcium phosphate and calcium silicate
  • Silicon and aluminum silicate
  • A piece of silicone

Ionization Energy and Spectrometric Analysis

The available ionization energy to energize large particles reaches a plateau, which is one of the reasons different spectrometric methods have limitations concerning particle size (3 microns maximum for ICP and 8 to 10 microns maximum for an RDE spectrometer).
Spectrometers, as they are applied today, are blind to large particles. Traditional methods of determining large particles (larger than 10 microns) are acid digestion (expensive and hazardous), microwave digestion (expensive and time consuming) and direct ferrography (does not include non-ferrous metals).
Rotrode filter spectroscopy (RFS) was developed to provide an improved spectroscopic method for analysis of used oils for condition monitoring/predictive maintenance without the particle size or metal-type limitations of previous combined spectrochemical and direct ferrographic techniques.
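These detection windows are worth keeping in mind when reading the case studies above and below, where ICP missed wear that RDE still reported. A minimal sketch using only the size limits quoted in this article (the function name and the treatment of the limits as hard cut-offs are simplifications of mine):

    def detectable(particle_size_um):
        # Upper size limits as quoted in this article: ~3 microns for ICP,
        # ~10 microns for RDE spectrometry (treated here as hard cut-offs).
        return {
            "ICP": particle_size_um <= 3,
            "RDE": particle_size_um <= 10,
        }

    # A 6-micron wear particle: beyond the ICP window but still visible to the RDE.
    print(detectable(6))  # -> {'ICP': False, 'RDE': True}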


Particles as Enemies

Special evidence, such as the scratch marks on the metal frets, suggested that uneven objects (particles) were responsible for abnormal wear of the liner and/or the crankshaft. The piece of silicone found indicated overuse of a silicone-containing substance like a sealant, which possibly was squeezed out between parts, cured and ripped off by the hot flowing oil. These silicone pieces could have blocked oil passages, resulting in a damaging situation of oil starvation.
Particles including silicon (quartz) and sand (aluminum silicate) as well as other debris discovered in the oil sample were responsible for the abnormally high wear. Since abrasive wear was the main cause of premature aging and resulted in severe damage to the parts in contact with these objects, the maintenance engineer wanted the reason for the initial ingress of those particles into the system to be investigated.
For the sampler, it was essential to ensure that as much evidence as possible was captured in the drawn sample. In this case, where the ultimate failure would have been catastrophic, the task could have been quite difficult, since all particles had settled to the bottom as the oil cooled. Thus, a typical sample drawn in the normal fashion may not have allowed all the evidence to be captured.

Particles as Friends

By unlocking the treasure of evidence that was captured in the particles found in the oil, the diagnostician obtained information about the formation of such particles. The presence of metal shavings indicated possible misalignment. Lack of lubrication also was detected, which possibly was due to blocked oil channels resulting from the presence of foreign particles. The metallic iron shaving with lead bound to it suggested welding due to oil starvation (metal-to-metal contact).
The discovery of a particle with scratch marks led to an investigation of objects that could have been responsible for the damage. One possible culprit was detected in a particle consisting of calcium phosphate and calcium silicate. This specific mineral (possibly apatite) together with particles containing quartz and sand led to the conclusion that the engine originated from a locomotive that was involved in an accident with subsequent derailment where soil was introduced to the engine. Evidently, the soil was not removed successfully when the engine was refurbished.


An iron shaving with scratch marks (top) and soil (above) were found in the oil sample.

Case Study #3: Wrist Pin Bearing Failure on a Diesel Locomotive

Prior to a wrist pin bearing failure, oil samples from a diesel locomotive were sent to two different laboratories for routine oil analysis. The first laboratory issued wear alerts on possible wrist pin bearing wear four weeks prior to the failure, while the second laboratory indicated no abnormal wear was taking place. A resample was taken, and again the second lab did not find any abnormal wear, while the first lab issued another wear alert.
The fleet owner decided to stop the locomotive to find out whether the alerts issued by the first laboratory were justified. It was discovered that the wrist pin bearing had failed with damage to four power packs. An investigation was launched to determine the root cause that resulted in the different diagnoses from the two laboratories.
Routine oil monitoring tests were performed, including spectrometric analysis using RDE spectrometry and PQ. An EDX analysis using the SEM on the filter debris was conducted after the sample was filtered through a 0.8-micron-filter membrane and rinsed with pentane to remove oil residue. The results of the RDE spectrometric analysis revealed an increase in silver, copper and iron, while the SEM analysis confirmed the presence of particles larger than 10 microns.
Since both laboratories performed similar analysis on a routine basis, the investigation focused on the differences in the techniques used by the two labs. The only major difference found was that the laboratories employed different spectrometric techniques to determine the wear metal content of the samples, namely ICP and RDE spectrometry.



These images of a locomotive engine reveal wrist pin bearing failure.

The primary variation between the two techniques is the way the sample is introduced to the system. For ICP analysis, the sample is diluted prior to introduction to the instrument. Therefore, it’s possible that the particles settled prior to analysis. The ICP also uses a peristaltic pump and transport system, which is subject to blockages.
In addition, the size limitation of the ICP is 1 to 3 microns, while the range of the RDE is 8 to 10 microns. The SEM analysis confirmed the presence of particles larger than 5 microns, so it seems the failure progressed beyond the point where the ICP could detect the wear particles but remained within the range of the RDE.

Case Study #4: Scored Liner and Piston Wear on a Diesel Locomotive

As part of an oil analysis program, the crankcase oil of a locomotive was monitored on a monthly basis. However, no samples were received for the period between January and the end of June. The engine failed at the end of September.
The reason for concern was that all laboratory reports returned with no indication of an increase in wear metal content. An investigation was initiated to explain why the laboratory tests failed to detect any increase in wear when it was evident that abnormal wear was taking place from the mechanical failure that occurred.
Since no abnormalities were found except for fuel dilution over a prolonged period, the investigation focused on sampling intervals and techniques that could have affected the results.
Routine oil monitoring tests, including spectrometric analysis using RDE spectrometry, were performed, as well as EDX analysis using the SEM on the filter debris after the sample was filtered through a 0.8-micron-filter membrane and rinsed with pentane to remove oil residue.
The results showed severe fuel dilution. The RDE spectrometry indicated no increase in metal content since the previous sample was analyzed. The EDX analysis revealed that isolated large particles (larger than 20 microns) of heavy metals and other inorganic oxides were present on the filter. Many of the larger particles were iron or iron oxides. The small particles consisted mainly of calcium sulphate.
 

These photos of a locomotive engine indicate a severely scored liner and piston wear.

Lowering of the fluid’s viscosity, which may have resulted from fuel dilution in the engine, aggravated the tendency of particles to settle. Therefore, it is possible that suspended particles had settled to the bottom of the sump and were not included in the sample.
In the earlier stages of failure, smaller particles were produced (likely during the period when no samples were submitted). As the failure progressed, the size of the particles increased. Since particles larger than 10 microns were found, it is possible that the failure progressed beyond the point where the RDE could detect the wear particles. Thus, severe fuel dilution over a prolonged period of time combined with not submitting oil samples at the initial stages of failure resulted in the inability to detect the failure through a routine oil analysis program.

A particle larger than 20 microns was found in the oil sample.

In conclusion, it is apparent that removal of particles from a system prior to sampling by means of indiscriminate filtration, improper sample handling and settling of particles can result in the loss of important evidence that could lead to the early detection of possible failures or assist in root-cause analysis.
Remember, the purpose of oil analysis is to avoid failure before it happens. Sensitivity with regards to particle sizes and size limitations of analytical techniques relative to sampling intervals is vital to reach this ultimate goal. In the end, the success of an oil analysis program to detect possible failure modes relies on the ability of the mechanical engineer, tribologist, sampler, analyst and diagnostician to treat and react to the presence of particles in the appropriate manner.

Wednesday, 3 April 2013

ISO Oil Cleanliness vs Operating Pressure

An informative article I read on Reliable Plant Newsletter this morning.
http://www.machinerylubrication.com/Read/28977/consider-contamination-control

Consider Contamination Control Before Buying Hydraulic Equipment

  

These days, best-practice contamination control is more like an accepted pre-condition for reliability. Given contemporary advances in technology for excluding and removing contaminants, it could be said that failure to control contamination is a failure of machine design rather than a failure of maintenance.
That said, effective contamination control is not something to be taken for granted. The results you get are only as good as those you demand, which is why it never hurts to be reminded of the reliability benefits of kicking fluid cleanliness up a notch. Consider the following case study:
A sugar mill was operating a fleet of more than 20 sugar cane harvesters. The typical fluid cleanliness of the hydrostatic transmission for the ground drive on these machines was ISO 22/20, and they were suffering regular pump failures - three pumps per machine, per season, on average.
The sugar mill contracted a local hydraulic engineering firm to investigate the recurring pump failures. They recommended a specification change to the ground-drive hydraulic motors and an upgrade of the filtration.
One machine was modified as a prototype, and after showing promising results, two more machines were modified in the first season. The ISO cleanliness code on the three modified machines was 18/15 or better.
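For context on what that change means, each step in an ISO range code represents roughly a doubling of the particle count, so dropping several codes is a large reduction. A rough sketch (the doubling rule is an approximation of the ISO 4406 ranges):

    def cleanliness_improvement(code_before, code_after):
        # Each ISO range-code step roughly doubles the particle count,
        # so dropping N codes cuts the count by about 2**N.
        return 2 ** (code_before - code_after)

    # From ISO 22/20 to 18/15, per size channel:
    for before, after in [(22, 18), (20, 15)]:
        print(f"{before} -> {after}: roughly {cleanliness_improvement(before, after)}x fewer particles")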

71% of machinerylubrication.com visitors consider contamination control targets before purchasing new equipment.

By the fourth year, 15 machines had been modified. The mill was now changing out one variable piston pump per machine every three seasons - a nine-fold increase in pump life.
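The nine-fold figure follows directly from the two failure rates quoted:

    # Before the upgrade: 3 pump failures per machine per season -> mean pump life of 1/3 season.
    # After the upgrade:  1 pump change per machine every 3 seasons -> mean pump life of 3 seasons.
    before_life_seasons = 1 / 3
    after_life_seasons = 3.0
    print(after_life_seasons / before_life_seasons)  # -> 9.0, the nine-fold increase in pump life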
Armed with this data, the sugar mill convinced the cane-harvester manufacturer to incorporate the same transmission and hydraulic filtration design at the factory.
This is not a scientific study into the benefits of improving fluid cleanliness alone, because clearly, other changes were made to the hydraulic circuit in addition to upgrading the filtration. We’re also not told what influence (if any) these modifications had on other important operating parameters such as pressure and temperature.

Example of Hydraulic Fluid Cleanliness Targets

[Table of example target cleanliness codes not reproduced in this extract.]
But what can’t be disputed is the drastic improvement in pump life. As a result, the equipment end user demanded that the machine manufacturer improve the specification (and initial cost) of the equipment they were purchasing. Of course, this was after the economic benefits of doing so had been clearly demonstrated to the end user.
For this hydraulic equipment owner, it was a case of “I once was blind, but now I see.” Prior to this education, they likely would have looked at two cane harvesters of similar capacity from competing manufacturers and bought the cheapest one - with little or no regard to machine reliability or life-of-machine operating costs.

Factors in Setting Target Cleanliness Levels

There are two important factors for hydraulic systems that can help you set target cleanliness levels. One is how sensitive the components are to contaminants. This is called contaminant tolerance.
The second factor is pressure. There is a disproportionate relationship between pressure and contaminant sensitivity: the greater the pressure, the far more sensitive the components are to contamination.
After you have considered the component type and the pressure, also consider the duty-cycle severity, the machine criticality, the fluid type and safety concerns. All of these factors collectively can be used to set target cleanliness levels in hydraulic systems.
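One common way to pull these factors together is to start from a baseline target for the most sensitive component and then tighten it for pressure, criticality and the other considerations. The sketch below is purely illustrative: the component categories, pressure bands and ISO codes are hypothetical placeholders and do not come from this article or any standard.

    def target_iso_code(component, pressure_bar, critical=False):
        # Hypothetical baseline ISO 4406 targets (4/6/14 micron channels) per component type.
        baseline = {
            "servo valve": (16, 14, 11),
            "piston pump": (18, 16, 13),
            "gear pump": (19, 17, 14),
            "cylinder": (20, 18, 15),
        }[component]
        tighten = 0
        if pressure_bar > 250:   # hypothetical pressure bands
            tighten += 1
        if pressure_bar > 400:
            tighten += 1
        if critical:             # duty-cycle severity, fluid type and safety could add further steps
            tighten += 1
        return tuple(code - tighten for code in baseline)

    print(target_iso_code("piston pump", pressure_bar=350, critical=True))  # -> (16, 14, 11)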

Even though they got it the wrong way around, this machine owner got it in the end. If you’re a hydraulic equipment buyer/owner, the key takeaway of all of this is that the best time to consider these issues is before you purchase a piece of equipment.
By starting with the end in mind, you get the maintenance and reliability outcomes you desire - before the machine even gets delivered. Like in the cane harvester example, you specify the contamination control targets you want to achieve based on your reliability objectives for the piece of equipment and instruct the manufacturer to deliver the machine appropriately equipped to achieve these targets.
Based on the weight and viscosity index of the hydraulic oil you plan to use, you determine the minimum viscosity and therefore the maximum temperature at which you want the machine to run. You then instruct the manufacturer to deliver the machine equipped with the necessary cooling capacity based on the typical ambient temperatures at your location, rather than accepting hydraulic system operating temperatures dictated by the machine’s one-size-fits-all designed cooling capacity - as is the norm.
For example, say you are about to purchase a 25-ton hydraulic excavator that is fitted with brand "X" hydraulic pumps and motors. According to the pump manufacturer, optimum performance and service life will be achieved by maintaining oil viscosity in the range of 25 to 36 centistokes. You also know that in your particular location you expect to use an ISO VG 68 hydraulic oil, and the brand of oil you are already buying has a viscosity index of 100.
This being the case, the pump manufacturer tells you, based on the viscosity and viscosity index of the oil you plan to use, that if your new excavator runs hotter than 70 degrees C, the performance and service life of the pumps and motors will be less than optimum. Not only that, with 70 degrees C as the maximum operating temperature, the oil, seals, hoses and almost every lubricated component in the hydraulic system will last longer.
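The arithmetic behind that advice can be approximated with the ASTM D341 (Walther) viscosity-temperature relation, fitted through the oil's two datasheet points. In the sketch below, the 100-degree data point (about 8.8 cSt) is my assumption for a generic VI 100, ISO VG 68 oil, so the computed limit is indicative only and will shift with the real datasheet values.

    import math

    def walther_fit(t1_c, v1_cst, t2_c, v2_cst):
        # Fit the ASTM D341 (Walther) relation log10(log10(v + 0.7)) = A - B*log10(T_kelvin)
        # through two viscosity/temperature data points.
        def y(v):
            return math.log10(math.log10(v + 0.7))
        x1, x2 = math.log10(t1_c + 273.15), math.log10(t2_c + 273.15)
        b = (y(v1_cst) - y(v2_cst)) / (x2 - x1)
        a = y(v1_cst) + b * x1
        return a, b

    def temp_for_viscosity(a, b, v_cst):
        # Temperature (deg C) at which viscosity falls to v_cst, from the fitted constants.
        log_t = (a - math.log10(math.log10(v_cst + 0.7))) / b
        return 10 ** log_t - 273.15

    # Assumed datasheet points for a generic ISO VG 68, VI ~100 oil: 68 cSt at 40 C, ~8.8 cSt at 100 C.
    a, b = walther_fit(40, 68.0, 100, 8.8)
    # With these assumed points the 25 cSt floor is reached in the mid-60s deg C; the exact
    # figure depends on the actual oil data, which is why the manufacturer's advice matters.
    print(round(temp_for_viscosity(a, b, 25.0), 1))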
So being the sophisticated hydraulic equipment user that you are, you say to the manufacturer before you order the machine: “I expect ambient temperatures at my location as high as 45 degrees C, and under normal conditions (i.e., no abnormal heat load in the system), I require this machine to run no hotter than 70 degrees C. If you deliver it to the site and it runs hotter than 70 degrees on a 45-degree day, then I’ll expect you to correct the problem - at your cost.”
You could continue by specifying other requirements that have an impact on hydraulic component reliability, such as that all hydraulic pumps have a flooded inlet, that no depth filters or screens be installed on pump intake lines and that no depth filters be installed on piston pump and motor case drain lines.
At the very least, as the cane harvester story demonstrates, the next time you or the company you work for are purchasing hydraulic equipment, be sure to define your fluid cleanliness and operating temperature/viscosity targets in advance and make them an integral part of your equipment selection process.

About the Author
Brendan Casey
Brendan Casey has more than 20 years' experience in the maintenance, repair and overhaul of mobile and industrial hydraulic equipment.