The Application of Simplified Process Statistical Variance Techniques to Enhance the Detection of Filtration Integrity Loss

Feb. 15, 2005

About the author: Mike Sadar is with the Hach Company, Loveland, CO and can be reached at [email protected]

Laser-based particle detection technologies, such as laser turbidity and particle counting, are often utilized to monitor filtration processes. In addition to the raw measurement values, these technologies can provide further information regarding the performance of the filtration process. The baseline of the filtrate or permeate leaving a filtration system can be characterized in greater detail, and the variability of the baseline itself can be quantified through simplified statistical techniques. Baseline variability monitoring can be applied during filtration performance monitoring, with the variability treated as a separate parameter. In the event of a fine integrity loss, the variability of the filtrate baseline shows a significant change relative to the established baseline, and this change is observed before the same particle event is detected using conventional techniques.

The purpose of this presentation is to demonstrate how the baseline variability parameter can be applied to monitoring filtration processes for integrity loss. The basic approach utilizes a process, laser-based light-scatter technology for monitoring the product leaving a filtration system. Research on this application is being performed to demonstrate the correlation between baseline variability and integrity loss in conventional dual-media filtration, ultra-filtration, micro-filtration, and reverse osmosis filtration.

Preliminary results indicate that the baseline variability parameter can enhance the sensitivity for detecting a fine integrity loss beyond the basic detection methodologies that are typically applied. The sensitivity of this method is promising, with an improvement of one to two orders of magnitude over the raw measurement values. The application is currently being tested on both membrane and conventional filtration systems in the drinking water industry.

This approach may add sensitivity to detection methods that apply similar technologies, and the application could easily be expanded into the many industries that utilize similar filtration practices.

Keywords: Process monitoring, integrity loss detection, light-scatter, baseline fluctuation

Introduction:

The use of particulate detection techniques is a key method for monitoring membrane filtration effectiveness and detecting filtration breakthrough. Of the available techniques, those based on light scatter have been used for many years. Two of the key detection parameters are turbidity and particle counting, both of which can be applied as process monitoring techniques. Recently, new laser-based techniques in turbidity analysis have led to the application of new and better methods for monitoring filter performance. These techniques are able to detect filtration integrity problems earlier and at lower detection levels. The purpose of this paper is to present one of these novel techniques, which has improved the detection of filtration integrity losses beyond conventional techniques. An overview of these analysis techniques will be covered. The data produced will then have a specific statistical analysis applied, which further enhances the data for the detection of integrity loss. The three techniques covered are: laser nephelometric turbidity, particle counting, and high-sensitivity laser backscatter turbidity. The latter technique, with algorithms developed to enhance the detection of filtration failures, will be discussed in depth.

Background:

Turbidity has long been recognized as a simple and basic indicator of water quality, and it has been used for decades to monitor drinking water, including water produced by filtration. Turbidity measurement involves the use of a light beam with defined characteristics, referred to as the incident light beam, to determine the quantity of particulate material present in water or another fluid sample. The material in the water scatters the incident light beam, and this scattered light is detected and quantified relative to a traceable calibration standard material. The greater the quantity of particulate material contained in a sample, the greater the scattering of the incident light beam and the higher the resulting turbidity. In addition to turbidimeters, light-obscuration particle counters have often been used to monitor filter effluent. While both instruments can be effective when monitoring for large-scale breakthrough events, neither has been found to be effective in predicting a breakthrough event or in consistently detecting fine integrity losses. Other instrument technologies are available, but the economics of obtaining and applying them often limits their use [1].

Turbidimeters and the data they produce (i.e., turbidity values) represent the entire quantity of particulates present in a sample. The measured value is compared to a primary analytical standard that is used to calibrate the turbidimeter. A higher turbidity value equates to a higher particulate concentration for a given sample. The turbidity value can be reported in a number of different turbidity units of measurement, often selected based on the type of light source and detection angle used to make the measurement.

Any particle within a sample that is capable of scattering light from a defined incident light source contributes to the overall turbidity of the sample. Thus, the goal of water filtration (or fluid filtration in general) is to eliminate these particles from solution. When filtration systems are performing properly, the effluent is characterized by a low turbidity value. Current turbidity instrumentation, which typically utilizes incandescent or long-wavelength light sources, is effective for detecting particles down to a certain level of particle removal from filtered water, but it becomes less effective on super-clean waters, where residual particle sizes and count levels are very low. At these low levels of turbidity, the actual sensitivity to a turbidity change can be so small that such a change becomes indistinguishable from the baseline noise of the instrument. This baseline noise has several sources, including inherent instrument (electronic) noise, instrument stray light, sample noise, and noise in the light source itself. These interferences are additive, and they become the primary source of false positive turbidity responses.

It is often the fine integrity losses and precursor conditions that are the most critical in assuring water quality and safety. For many pathogens, the concentration required to impact human health is extremely low. Therefore, a fine integrity loss, such as that caused by a small hole in a membrane or the beginning of a filter breakthrough event, can still have negative impacts on human health. These subtle changes in filtration effectiveness often go undetected by conventional particle detection instrumentation, yet the raw measured values are often the sole criterion for monitoring the performance of a filtration system.

The advent of laser nephelometry can better address the low-level analysis of turbidity in cleaner water samples. Laser turbidimeters (also known as laser nephelometers) possess enhanced optical designs that yield greater sensitivity and baseline stability than traditional instrumentation. The primary distinction between a laser nephelometer and traditional instrumentation lies in the incident light source and the detector. The laser turbidimeter utilizes a highly collimated, focused light source that is primarily monochromatic. Typical light sources include lasers and laser diodes. The characteristics of this light source allow the light energy to be concentrated on a very small, focused area in the sample chamber of the instrument. This combination provides an incident beam with a high power density, which can be effectively scattered by particles. The detector is also of greater sensitivity and provides greater response to the specific wavelength(s) emitted by the incident laser light source. Preferably, the peak of the detector response spectrum should completely overlap the spectrum emitted by the incident light source to generate maximum optical sensitivity. This combination of detector sensitivity, collimated light source, and the high power density of the laser provides a very high signal-to-noise ratio for the laser turbidimeter, which enhances the sensitivity to detect very small changes in turbidity and yields a very stable measurement baseline [2]. If the baseline variability is minimal with respect to instrument noise, the variability measured on clean, particle-free water will also be minimal.

The characteristics just described are all necessary to generate a very defined analysis volume within the sensor. This is often referred to as the view volume: that portion within the turbidimeter or particle counter sensor where the incident light comes to a focal point and is then scattered by particles passing through this defined area. It is also that portion of the volume within the sensor in which the detector "views" any scatter of the incident light. Thus, it is the overlap of the incident light and the detector view area that constitutes the overall view volume of the instrument. It is of key importance to keep the view volume small while keeping the power density of the incident light high. When the view volume is small, well defined, and of high power density, a single particle or a very small number of particles will be capable of generating enough scattered light to be detected.

Laser turbidimeters, and other instruments that provide high signal-to-noise ratios, yield extremely stable measurement baselines in comparison to those exhibited by traditional turbidimeters. With such stable baselines, very fine changes in the turbidity of a sample are distinguishable. This baseline stability emphasizes any changes in stability and can serve as an additional analysis parameter complementing the directional trending of the turbidity measurement value itself.

Currently, turbidity and particle counting methods are applied using only the analytical measurement value (the turbidity value or particle count) and the direction or movement of these values. For example, the turbidity value is usually monitored for an increase over some interval of time. If the measured value exceeds some imposed level, it indicates that the process used to produce the effluent stream may need to be investigated. Typically, the baseline variability is ignored and treated as inherent measurement noise. It was not until the development of instruments capable of producing extremely stable measurement baselines, such as laser turbidimeters, that the variability could be studied from a quantitative and qualitative standpoint. The data gathered have established that the variability can be treated as an independent measurement parameter, which is useful in the detection of fine integrity losses such as those that can occur in membrane filtration systems. The variability can also be used to predict the large-scale breakthrough events that can occur in other filtration processes, such as those used in drinking water plants.

Monitoring Techniques for Filtration Breakthrough:

When measurements are performed in a process setting, several methods can be used to analyze and interpret the data:

    Monitoring the measured value for a distinct step change.
    Monitoring the trend in the measured value over a distinct increment of time.
    Comparing an interval of data against a pre-established baseline.

These common techniques are used to evaluate data, and they typically prompt a responsive action to data that has already been logged.
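As a rough illustration of these three approaches, the minimal Python sketch below implements each check against a list of logged turbidity values. The function names and thresholds are illustrative assumptions, not taken from any instrument or control system.

    import statistics

    def step_change(values, threshold):
        # Method 1: flag a distinct step change between successive logged values.
        return any(abs(b - a) > threshold for a, b in zip(values, values[1:]))

    def trend(values, minutes):
        # Method 2: average rate of change over a distinct increment of time.
        return (values[-1] - values[0]) / minutes

    def exceeds_baseline(values, baseline_mean, limit):
        # Method 3: compare an interval of data against a pre-established baseline.
        return statistics.mean(values) - baseline_mean > limit

All three operate on data after it has been logged, which is what makes them responsive rather than predictive.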

A novel method has been developed that uses basic statistical processing techniques to help predict a change in filtration performance prior to gross failure of the process. The technique discussed here involves applying a simple statistical calculation to measurement parameters with a distinct set of common measurement and logging characteristics. It consists of processing the variability of the turbidity signal into an independent parameter and then linking this parameter to the detection and qualification of a filter breakthrough event. When a filtration process is performing well, the baseline should be quiet and stable. Typically, when a filtration breakthrough occurs, the turbidity or particle count level suddenly increases as the inflow of particles passes through the failed particulate boundary. Prior to the breakthrough, however, the baseline will often become unstable with respect to its subtle inherent variability. This increase in variability occurs even though the turbidity measurement trend remains stable.

Figure 1 provides a turbidity and particle-counting chart of a typical filtration run at a conventional water treatment plant. The filtration mechanism in this application was a multi-media filter in which 12 inches of sand are overlaid with 36 inches of anthracite. The anthracite itself is further distributed into layers based on particle size: the layers with the smallest particles are deepest, and the top layer contains the largest particles. This filtration mechanism, when aided by polymer chemicals, is efficient at removing particles greater than 1 µm. In Figure 1, the turbidity and particle count levels of the effluent water leaving the filter were monitored over time. As the filter run progressed, increases in the turbidity and particle count levels were seen, but the changes were not consistent enough to be a true indicator of breakthrough [3]. However, focusing on the individual baselines reveals that the variability of the baselines increases as the run progresses. This baseline variability is often written off as inherent instrument noise, when in reality it is a reflection of the filtration process.

Figure 1 - Turbidity and particle count monitoring of a filter run involving a multi-media drinking water filter.
(see PDF version to view Figure 1)

Methodology for the Measurement and Calculation of Baseline Variability:

The technique for enhancing the detection of pending filtration failure (i.e., filtration breakthrough) is intended to provide a simplified process statistical model that can be applied to the measurement parameters of process laser turbidimeters or process particle counters. The resulting variability value can then be treated as a separate and distinct monitoring parameter. In addition, the baseline variability parameter can provide a qualitative measure of the nature of a particle breach. The variability is sensitive to both particle size and counts, which are linked when dealing with natural samples. When a loss in filtration integrity does occur, the monitoring technique not only detects the loss event but also characterizes the nature of the event with respect to threshold particle size.

The measurement of variability itself can be quantified through a single statistical quantity known as the percent relative standard deviation, or %RSD (RSD in this paper). The RSD is calculated as the standard deviation for a given set of measurements divided by the average of the same set of measurements. The quotient can then be multiplied by 100 to express the result as a percent. See Equation 1 below:

RSD = (StdDev_n / Avg_n) × 100 (1)

Where n = a defined number of measurements that are used to calculate both the average and the standard deviation.
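As a minimal sketch, Equation 1 maps directly onto a few lines of Python using only the standard library; the sample readings shown are illustrative, not data from the study:

    import statistics

    def rsd(window):
        # Equation 1: percent relative standard deviation of n measurements.
        return statistics.stdev(window) / statistics.mean(window) * 100

    # Example: seven consecutive turbidity readings in mNTU.
    print(rsd([10.2, 10.4, 10.1, 10.3, 10.2, 10.5, 10.3]))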

The RSD parameter, when applied to a process measurement environment, has several variables that should be held constant once they are defined. These include: 1) the number of measurements used each time to generate the RSD value; 2) data set filtration; 3) data overlap; 4) measurement frequency; and 5) data logging rate. These are discussed in more detail below, and a short code sketch following the list shows how the first three settings interact.

    Number of measurements:
The number of measurements used to generate the RSD value will impact the sensitivity and response time of this parameter. If the RSD value were generated using a very small data set (2–4 values), the resulting baseline would contain a significant amount of inherent noise. This noise could mask the sensitivity in much the same way that a low signal-to-noise ratio impacts the sensitivity of a process measurement. If too many values (15–20) are used to generate the RSD measurement, the response of the parameter to an impending particle event will be delayed.
    Data set filtration:
Depending on the application of the analytical instrument, the sample may be inherently noisy and require filtration of the data prior to the RSD calculation. For example, a sample that contains a significant amount of bubble interference may have such a high level of noise in the baseline that the variability may be lost. In such a case, a large number of measurements may be taken. From this set, a certain percentage of the highest values and a percentage of the lowest values are excluded from the data. The remaining core values are then used to generate the RSD value. Measurement systems that can perform measurements at a high rate (>5 measurements/second) can often apply successful data filtration techniques.
    Measurement or data overlap:
Data overlap is an approach in which a portion of the measurement data used to generate one RSD value is combined with new measurement data to generate the next RSD value. Measurement overlap may be required for analysis systems in which measurements are performed slowly. The sensitivity of the RSD parameter may decrease if high measurement overlap is used, which will also increase the response time to an impending event. Ideally, the less data overlap that is applied, the greater the sensitivity of the RSD parameter to an impending filtration integrity event.
    Measurement frequency:
The measurement frequency typically correlates directly to the response time for any particle event. A slow measurement frequency would likely require more data overlap and/or smaller data sets to generate the RSD value. However, a high measurement frequency would allow for less data overlap and the application of larger data sets when determining the RSD values. Data filtration applications will be more successful on technologies with high measurement frequency.
    Data logging rate:
Data logging rate can impact the performance of the RSD parameter if measurement values are logged before the RSD algorithm is applied. Whenever possible, the data-logging rate should approach the measurement frequency.
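Building on the rsd() function above, the sketch below shows one way the window size, data set filtration, and data overlap settings could be combined. The default parameter values are placeholders to be tuned per instrument, not recommendations from the study:

    def trimmed_rsd(window, drop_low=0, drop_high=0):
        # Data set filtration: discard the lowest/highest readings before Equation 1.
        core = sorted(window)[drop_low:len(window) - drop_high]
        return rsd(core)

    def rolling_rsd(readings, n=7, step=1, drop_low=0, drop_high=0):
        # step < n reuses part of the previous window (data overlap);
        # step == n uses fully independent windows. Smaller steps shorten
        # the response time at the cost of correlated successive RSD values.
        return [trimmed_rsd(readings[i:i + n], drop_low, drop_high)
                for i in range(0, len(readings) - n + 1, step)]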

Figure 2 provides an example of how the sampling range and RSD algorithm can be treated as a separate parameter. In this figure, another dual-media filter at a drinking water plant is being measured using a laser turbidimeter (the green trace). In addition to the turbidimeter measurements, several different RSD parameters were generated, using sample set sizes of 3, 7, 12, and 20 data points (dark blue, purple, yellow, and light blue traces, respectively), to produce the respective trend lines. These trend lines were generated using overlapping data, created by eliminating the oldest point and adding a new point to each successive data set.

Figure 2 - Comparison of the RSD and laser turbidity sensitivity across a filter run cycle for a conventional drinking water plant dual-media filter.
(see PDF version to view Figure 2)

Figure 2 illustrates three points. First, all the RSD parameters display a high degree of sensitivity to particle concentrations in the sample. Second, the magnitude of the RSD response depends on the sampling algorithm: the smaller sample set sizes produce a higher response than the larger ones, although the larger set sizes do not necessarily slow the response to any of the turbidity changes recorded during this run. This illustrates how the RSD parameter can be used to assess filtration performance by magnifying the turbidity changes that do occur. Third, the RSD algorithm responds to both positive and negative turbidity changes. This graph provides an example of how a specific parameter can be complemented with a second parameter to amplify changes in the particle concentration of the sample.

Several internal studies have been completed in which the RSD parameter was applied as a real-time process-monitoring tool. Laser turbidity data from two of these application studies are presented in the data and results section. In addition, the RSD parameter has been demonstrated to be more sensitive to particles in a defined size range and less sensitive in other size ranges. This is a function of the optical design of the instrument, specifically the wavelength of light used and the detection angle of the scattered-light detector. Once the sensitivity of the RSD parameter has been defined, it can be utilized in conjunction with the turbidity response to provide a quantitative assessment of the particle event. An example of how a detected event can be used in a qualitative manner is also discussed in the data and results section.

Data and results:

Two examples of how the RSD algorithm can be used to monitor filtration integrity will be discussed here. The first application involved a filtration run on a pilot-scale direct filtration plant testing conventional water filtration processes. The second application involved integrity monitoring of the filtrate leaving a full-scale ultra-filtration membrane module. The module underwent a series of fiber cutting integrity tests during a membrane selection process for a new membrane-based drinking water plant. These two applications are discussed in more detail below.

Conventional Filtration Monitoring:

The goal of this study was to determine the impact that different filtration media sizes would have on the overall filter run time. The filter run through this pilot plant closely simulated the flow rates, head pressure, and design of the full-scale filter. After a backwash and ripening period, the filter run was allowed to proceed until breakthrough was confirmed using traditional turbidity measurement guidelines. In this test, the filter effluent was monitored for breakthrough, which was confirmed when the turbidity exceeded the regulatory established value of 0.3 NTU, or 300 mNTU (note that 1 NTU = 1000 mNTU). Figure 3 presents the data from this filter run. The predominant monitoring instrument for this pilot plant was a laser nephelometer with a 90-degree scattered-light detection angle. In addition to the turbidimeter measurements, the filtration run was monitored using three different process RSD algorithms, based on sample sets of the previous 3, 7, and 12 laser turbidity measurements. The RSD data were generated and plotted along with the laser turbidity measurements. All parameters were logged at one-minute intervals.

In Figure 3, the turbidity of the filter run is displayed on the left y-axis and is illustrated by the pink trace. The RSD responses for the different data set sizes are plotted on the right y-axis and are represented by the purple, yellow, and blue traces. The brown trace represents the turbidity reporting limit for filter effluent, corresponding to the point at which the filtration process is considered to be at breakthrough. There is one significant peak for the RSD-3 algorithm, indicating a significant change in the particle concentration of the effluent. This spike was seen prior to the actual breakthrough time of 1554. The RSD-3 peak appears to be an indicator of the actual breakthrough event and does overlap the eventual laser turbidity spike. The actual cause of the filter breakthrough was the failure of an alum feed pump used to promote particulate coagulation and thereby enhance filtration. The pump failed at 1500, approximately five minutes prior to the upward trend in the RSD-3 signal. Thus, the RSD-3 peak indicated a significant change in the baseline noise, which may reflect the changes to the process stream caused by the pump failure.

Figure 3 – Laser turbidity monitoring for filtration breakthrough using a set of different baseline variability (RSD) algorithms.
(see PDF version to view Figure 3)

Smaller RSD responses were also observed with the RSD-7 and RSD-12 trend lines. These algorithms responded later to the imminent particle event, but they still responded slightly before the noticeable upward trend in the turbidity values. The example provided in Figure 3 demonstrates how different sample set sizes can dictate the predictability and the significance of the RSD response to an impending particle event [4].

Membrane Integrity Monitoring:

Membrane integrity monitoring has become recognized as a necessary and required process for the production of water for human consumption. When testing a membrane system for a drinking water application, it is important to determine the detection levels for an integrity breach. One common method is the fiber cutting/pinning test (referred to here as fiber cutting), which involves the deliberate severing of a fixed number of fibers in a membrane module. The damaged membrane is then brought back online and the filtrate is monitored for the presence of particulate material until a stable baseline is established. The module is repaired in a series of steps, usually one fiber at a time, and a new filtrate baseline is established after each fiber is repaired. This process continues until the module integrity is completely restored.

The turbidimeter used to monitor the membrane filtrate stream for the duration of the fiber cutting tests was a laser turbidimeter with a backscatter detection angle of 40 degrees relative to the incident light beam. The instrument was capable of performing measurements at a high frequency of approximately 20 measurements per second. Each data set was first filtered by removing the ten highest and three lowest values, and no measurement data overlapped between data sets. The remaining seven measurements were used to calculate the averaged turbidity value and the respective RSD value. Both values were then logged and plotted as separate process measurements. Figure 4 provides the monitoring data for a membrane fiber pinning test on an ultra-filtration membrane module. The module contained approximately 20,000 fibers that were housed in two modules capable of producing filtered water at up to 50 gallons per minute. The turbidity and RSD of the filtrate were plotted through a sequence of fiber cutting/pinning events. The left-hand axis displays the net light-scatter response (synonymous with laser turbidity), represented by the red trace. The right-hand axis displays the RSD response, represented by the green trace.
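In terms of the sketches above, the data reduction just described could look like the following; the readings shown are illustrative values, not data from the study:

    # One second of backscatter readings (~20 values; illustrative numbers).
    window = [10.2, 10.4, 10.1, 10.3, 10.2, 10.5, 10.3, 10.2, 10.4, 10.6,
              10.1, 10.3, 10.2, 10.4, 10.3, 10.2, 10.5, 10.3, 10.2, 10.4]
    core = sorted(window)[3:-10]                  # drop the three lowest and ten highest
    turbidity = statistics.mean(core)             # logged as the averaged turbidity value
    variability = trimmed_rsd(window, drop_low=3, drop_high=10)  # logged as the RSD value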

The data in Figure 4 represent the pinning of severed fibers in chronological order from left to right. Black vertical dashed lines separate each test. The upper control limits (UCL) for each parameter were calculated and plotted on the graph; these are represented by the thicker green and red horizontal traces for the turbidity and RSD process measurements.

The UCL values were derived from baseline data established either before or after the series of integrity tests, during a period when the membrane was known to be free of any integrity breach. The UCL calculations were based on the data covering the entire baseline run. For each parameter, the average and standard deviation were calculated over the baseline run. The standard deviation was multiplied by a factor of three, and this product was added to the averaged value of the baseline run to generate the upper control limit at a 99 percent confidence level. Equation 2 summarizes this UCL calculation [5]:

UCL = Average_run + 3 × StdDev_run (2)

Each UCL value was considered to be the limit of sensitivity for its respective monitoring parameter, and it was used as the threshold that must be exceeded for a positive response to a specific fiber-cutting test. The relative change between the established baseline (under normal operating conditions, when the membrane pilot had no compromised fibers) and the resulting baseline generated during a fiber-cutting test was calculated and expressed as a percent change.
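A minimal sketch of Equation 2 and the percent-change comparison, again in Python and assuming the baseline and test-run values are already available as lists of logged readings:

    def upper_control_limit(baseline):
        # Equation 2: baseline average plus three baseline standard deviations.
        return statistics.mean(baseline) + 3 * statistics.stdev(baseline)

    def percent_change(baseline, test_run):
        # Relative change between the intact-membrane baseline and a test run.
        b = statistics.mean(baseline)
        return (statistics.mean(test_run) - b) / b * 100

    # A fiber-cutting test registers as a positive detection when its values
    # exceed the UCL derived from the intact-membrane baseline, e.g.:
    # detected = statistics.mean(test_rsd) > upper_control_limit(baseline_rsd)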

Figure 4 - Graphical display of laser turbidity and baseline variability monitoring for membrane integrity loss during a series of fiber cutting tests. Numbered events are as follows: Event 1: 4+ cut fibers and 2 pinholes; Event 2: 4 cut fibers and 2 pinholes; Event 3: 3 cut fibers and 2 pinholes; Event 4: 2 cut fibers and 2 pinholes; Event 5: 1 cut fiber and 2 pinholes; Event 6: 2 pinholes; Event 7: 1 pinhole. (see PDF version to view Figure 4)

In Figure 4, the UCL was exceeded for both parameters at all levels of cut fibers (events 1 through 5). The pinhole breaches, represented by event 6, also resulted in a positive detection for both the RSD and the turbidity parameter. This graph provides an example of how the two parameters serve as complementary detectors of the different integrity events. What it does not display, however, is the actual relative change in the response of the two monitoring parameters for each specific membrane integrity test. These changes in response relative to each baseline are displayed in Table 1, titled "Calculated Average Response for Each Integrity Test Performed on the UF Module."

Table 1 - Calculated Average Response for Each Integrity Test Performed on the UF Module (see PDF version to view Table 1)

Table 1 displays the averaged response of the turbidity and RSD parameters for each of the membrane integrity tests performed on the UF module. Note that for the same event, the response of the RSD parameter was between one and two orders of magnitude greater than the turbidity response. This additional sensitivity becomes a significant factor when the effluent from numerous membrane modules is combined and an integrity breach is diluted. If detection of a fine integrity loss is needed, the RSD parameter may be the sole parameter capable of detecting such a breach.

The Effect of Particle Size on the RSD Parameter:

Particle size and concentration have long been known to impact turbidity measurements due to the specific interaction that different-sized particles have with a beam of monochromatic light. Past studies have shown that light scatter is highly wavelength dependent and is also dependent on the size, shape, and morphology of the particle. For many of the same reasons, particle size and concentration can also impact the response of the RSD parameter.
