Using Baseline Monitoring Techniques to Assess Filter Run Performance

Jan. 31, 2005

About the author: Michael J. Sadar is an Application Scientist with the Hach Company, Loveland, Colorado, and Kathleen Bill is the Water Operations Specialist for the City of Aurora, Colorado.


Abstract:

Determining if a filter run is approaching a breakthrough condition is a daily challenge for water treatment plant (WTP) operators. Current techniques look for upward trends in either turbidity or particle counts of the filter effluent; however, these trends do not consistently predict actual filter breakthrough.

This study’s objective is to determine if data from different particle detection technologies can be better utilized to characterize filter performance. Simple statistical techniques will be used to interrogate the stability of the baseline filter effluent water values. This study utilizes both traditional and new particle detection technologies to monitor filter effluent for the entire filter run. Each type of technology (particle counters, regulatory turbidimeters, and laser nephelometers) will be evaluated separately on each filter run to determine which generates the best correlation between baseline stability and filter performance. The ability of each of these particle detection technologies to predict filter breakthrough will be evaluated. Ultimately, this study will determine if such correlations can provide a definitive means of interpreting filter performance and can then be used to predict filter breakthrough.

Data from complete filter runs at a pilot-scale plant will be used. The data from the filter runs, which were allowed to proceed through breakthrough, will help determine if this information can actually predict breakthrough. In addition, effluent data from several full-scale water treatment plant runs were analyzed and an example was presented. This information will be used to confirm the pilot-plant data and increase the credibility of the pilot study model.

Results have shown that when laser nephelometers, and to a lesser degree particle counters, are used to monitor filter effluent, the measurement baseline stability decreases (the noise level increases) as the filter run progresses. This decrease in measurement stability, when observed during filter effluent monitoring, is often attributed to electronic noise in the instrument. This study provides evidence that the “noise” is not attributable to the laser nephelometer but is instead due to subtle changes in the sample.

Challenges in Predicting Filter Breakthrough

A major challenge and a primary goal for water treatment plants is the maximization of filtration output while simultaneously providing the consumer the highest quality water possible. Studies have shown that when filter effluent turbidities and particle counts are kept low and constant, the risk of microbial contamination at the filter is low and the overall water quality is high.

All water treatment plants want to avoid a filter breakthrough event. Because of the concern surrounding filter breakthrough, stringent regulations continue to be placed on the effluent water to minimize the risk of pathogen breakthrough. The Enhanced Surface Water Treatment Rule1 now requires that turbidity be monitored on every filter. This specific rule does two things:

  • First, it offers continuous profiling of every filter run and provides assurance that the filtration processes are performing well.
  • Second, it provides a more rapid and direct troubleshooting path when a filtration problem occurs.

Since each filter is now being monitored, it would also be beneficial to use the data already being collected to predict filtration problems before they actually occur.

Even amid these new filtration-monitoring requirements for turbidity, most well-run drinking water plants (DWPs) do not rely heavily on rising turbidity levels to decide when a filter run should be terminated. Instead of using rising turbidity to predict filter breakthrough, they attempt to be more proactive and terminate the run based on events other than filter effluent monitoring. Three accepted methods for determining when to terminate a filter run are:

      1) Loss of head pressure

      2) A timed run -- a set filter run duration based on past filter performance

3) An increase in turbidity levels (the regulated parameter)

Although the run is terminated when any of the above criteria is met, with any of these termination methods the breakthrough condition could already have occurred. To avoid this, most plants apply a conservative filter run time and terminate the run even if the filter continues to perform well. This has proven to be the safest and most proactive approach, and the practice is still backed by monitoring the filter’s performance.

Using particle detection instrumentation to predict a filter breakthrough before the event actually occurs would be a proactive and beneficial approach to filter management. If successfully applied, this information would give the water treatment plant additional throughput, by delaying backwash where appropriate, and more time to react to an unforeseen filter problem leading to a breakthrough of particles. Instrumentation such as particle counters and turbidimeters can be used to detect filtration problems, but it is often not used consistently enough; most water treatment plants have not gathered sufficient data with these instruments to predict every event.

    Issues Associated with Regulatory Compliant Particle Monitoring Technologies

One problem with current instrumentation and the associated monitoring methods is that a particle event is likely already occurring before the water treatment plant can react. In many situations, if breakthrough does occur, the effluent has been contaminated and the risk of pathogenic contamination has increased. History has shown that reacting to a turbidity spike after it is detected provides little benefit.

A second problem is related to instrument sensitivity limits. Turbidimeters designed to comply with EPA 180.12 specifications may lack the sensitivity necessary to see low-level filtration problems. A prime example is the 1993 Cryptosporidium outbreak in Milwaukee, Wisconsin. According to reported turbidity measurements, the combined effluent turbidity levels never exceeded the then-applicable regulatory limit of 0.5 NTU1. Part of the problem may be that the instrumentation in use at the time was not sensitive enough to provide conclusive data on very low-level turbidity events.

Current turbidimeter designs are often unable to detect ultra-low-level turbidity changes because of instrument technology limitations. Previous turbidity regulations mandated a sensitivity level of 0.050 NTU, and most instruments can detect differences well below this level (at least down to 0.010 NTU). However, at levels below 0.010 NTU, changes are often attributed to instrument noise and are usually discarded.

If instrument designs increased baseline sensitivity while holding instrument noise to extremely low levels, minor turbidity changes and their sources would be easier to trace. If detection sensitivity were increased, and if the instrument could consistently and dependably sense the smallest changes in turbidity, measurement confidence would also increase. With such instrumentation, low-level turbidity changes could be used to predict filter spikes. Unfortunately, current regulatory turbidimeters have not provided consistent performance at ultra-low turbidity levels.

In summary, basing the initiation of a backwash on a spike in turbidity or particle counts has been difficult. Historical data is often inconsistent across multiple runs, and this inconsistency makes it difficult to determine the parameters needed to initiate a backwash. To compound the problem, the data obtained from particle counters and traditional turbidimeters does not always agree with respect to event detection. Also, when an event does occur, it must be determined whether the event is minor or major; if it is minor and of short duration, initiating a backwash may be unnecessary. The goal of applying particulate detection instrumentation to filter effluent is to consistently predict major particle spikes or breakthrough events using defined criteria.

    Applying New Technologies to Low Level Measurements:

A new generation of instrument technologies may meet the requirements of ultra-low-level monitoring and, in doing so, restore user confidence in applying the data to real-life effluent monitoring. One breakthrough technology, laser nephelometry, combines a stable, well-defined light source with a highly sensitive nephelometric detector.3 This combination yields a nephelometer with an exceptionally stable baseline due to very low instrument noise. The optical design of the laser nephelometer provides high sensitivity to light scatter from particles that are less than 1 µm in diameter, and minor turbidity changes are easily detected against the stable baseline. Hach Company offers the FilterTrak® 660 (FT660) laser nephelometer.

A second instrument, the particle counter, has been applied to filter effluent with good results. The most common on-line particle counters are those with particle size sensitivity down to 2 µm in diameter. Studies have shown that particle counts often begin to trend upward prior to a filter breakthrough; a regulatory turbidimeter often will not see the same upward trend until later in the filter run. However, particle counters have not been shown to consistently predict breakthrough, so confidence in their ability to predict filter breakthrough by detecting ultra-low-level events has not been high. The most successful implementations of particle counting for breakthrough prediction have come from those willing to spend large amounts of time collecting and interpreting data.

Figure 1 displays a typical filter run monitored by a particle counter, a regulatory turbidimeter, and a FT660 laser nephelometer. Overall, the run is very quiet and the turbidity baselines are very stable. At first glance, the data shows that the filter was performing well within its regulatory limits. However, a closer look at the data from each instrument may provide additional information that could benefit the operator.

    Figure 1 — WTP Filter # 12 Effluent Particulate Monitoring, May 5, 2000
    (refer to the PDF version to see the chart)

One feature of Figure 1 is the amplitude of the turbidity baseline. As the run progresses, the amplitude increases very slightly. This amplitude is often referred to as baseline noise and is often attributed to instrument (electronic) noise. In this graph, however, the amplitude is not consistent throughout the duration of the filter run—which would be the case if it were instrument noise. Instead, the amplitude continues to increase as the run progresses. This points to a slow but deliberate change in the filtration mechanism or in the process prior to filtration.

The change in amplitude is clearly apparent for the laser nephelometer and, to a lesser degree, is observed on the regulatory turbidimeter. In a limited fashion, the change in amplitude is also visible for the particle counter. The amplitude change is not as significant on the latter two instruments, since part of the fluctuation is lost in their baseline noise, while the laser nephelometer displays the increasing baseline amplitude easily. Figure 1 displays the effluent baseline fluctuations for laser turbidity and particle counting during an exceptionally quiet run; in most cases, the change in amplitude is far more dramatic.

    Figure 2 shows a more typical filter run (the next 24-hour monitoring period of the same filter shown in Figure 1). The filter effluent was monitored using the same three particle detection instruments. During this run, the turbidity amplitude increases dramatically as the run progresses. The particle count noise appears to increase slightly and shows a slight increasing trend in counts from a baseline of 2 to 3 cts/mL. The same minor upward trends are observed with the FT660 and regulatory instruments; however, it is the baseline noise that is dramatic.

    Figure 2 WTP Filter # 12 Effluent Particulate Monitoring, May 6, 2000
    (refer to the PDF version to see the chart)

One question to consider is the cause of this progressively unstable turbidity baseline, which correlates with filter run length. A possible explanation for this phenomenon is linked to the optical design of the FT660 laser nephelometer and to the nature of particle scatter.

    The Theory Relating to the Increased Amplitude of the Turbidity Baseline

The FilterTrak 660 optical design incorporates an incident light beam from a very defined, coherent, laser light source. This beam has virtually no divergence when it passes through ultra-low-turbidity water. The incident light that reaches the bottom of the sample chamber is absorbed into a special material, which virtually eliminates all internal incident light reflections. Because the residual stray light from this optical system is minimal, an exceptionally stable, low-noise baseline is produced. The incident beam has a small diameter but contains a very high level of energy, yielding a much higher incident beam power density than in a traditional turbidimeter. The result is a substantially higher signal-to-noise ratio for the FT660, allowing ultra-low turbidity fluctuations (those less than 0.001 NTU) to be reliably detected. When this incident light system is coupled with a highly sensitive PMT detector, the sensitivity to light scatter from particles is further enhanced.3

Referring back to Figure 2, we observe an increase in the amplitude of the turbidity signal (baseline noise) as the filter run progresses. A plausible explanation for this noise is the detection of a relatively small number of larger particles that may be detaching from the filter media as the run progresses. The theory behind the detachment mechanism is described below.

Both the pilot plant and the full-scale water treatment plant incorporated a filter-aid polymer as part of the filtration process. The polymer essentially provides a “sticky” coating on the filter media that is created by the polymer charge. Polymers can be negatively, neutrally, or positively charged; in this study, the polymer charge was neutral. As filtration progresses, the particles remaining in the settled water will, in theory, adhere to the “sticky” surfaces of the filter media and enhance the filtration process. At the beginning of a typical filter run this process is very efficient and nearly all of the particles adhere to the filter material. As the run progresses, more and more particles bind to the material until most of the binding sites are occupied. With fewer binding sites available, two scenarios can occur:

1) Since all the binding sites are taken, additional particles cannot bind to the media, so they begin to work through the media and, eventually, into the effluent.
2) When the media is agitated by hydraulic forces applied in the filter, some particles that are attached to the media break free.

The result in both situations is that some particles will begin to work their way through the filter and, with enough time, make their way into the effluent stream. The chance of particles either breaking off the media or working their way through the filter becomes greater as the filter run progresses. Eventually, particles begin to trickle through the filter in very low numbers, and it is these few particles that are detected and seen as the increase in the amplitude of the baseline turbidity.

The reason the FT660 is so sensitive to low numbers of particles is the beam geometry and high light-scatter efficiency of this instrument (described previously). As a particle moves into the incident beam, it may be large enough to produce some detectable light scatter. Because particles are constantly moving in and out of the beam, the resultant baseline noise, such as that seen in Figure 2, increases. As more and more particles enter the effluent sample, the fluctuations will increase, and the overall baselines of the turbidimeter and the particle counter will also rise. At this point, the filter is beginning to lose its effectiveness, and an upward trend in turbidity and/or particle counts may be observed, indicating that filter breakthrough is approaching.

    Study Goals

The application of particle-sensitive instrumentation may be used to assess filter performance as the filter run progresses. This study will determine if either particle counters or laser nephelometers can be used to correlate the baseline turbidity or particle-counting fluctuations to filter performance. Several process algorithms, each measuring the standard deviation of a fixed number of consecutive running measurements relative to their mean, will be used to assess and quantify filter run performance. This calculation will be referred to as the RSD (relative standard deviation) algorithm and will be applied as a process calculation to both the laser nephelometer and the particle-counting data from the filter effluent stream. In addition, the algorithm will be applied to the data to determine if it could signal the approach of a catastrophic filtration event, such as a breakthrough.
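
Stated concretely (the notation below is ours, reconstructed from the description above rather than taken from the study), the RSD over the most recent n measurements is:

```latex
% Windowed RSD over the last n measurements x_{t-n+1}, ..., x_t, as a percent
\mathrm{RSD}_n(t) = 100\% \times \frac{s_n(t)}{\bar{x}_n(t)},
\qquad
\bar{x}_n(t) = \frac{1}{n} \sum_{i=t-n+1}^{t} x_i,
\qquad
s_n(t) = \sqrt{\frac{1}{n-1} \sum_{i=t-n+1}^{t} \left( x_i - \bar{x}_n(t) \right)^2 }
```

The sample standard deviation (n − 1 divisor) is assumed here; the article does not state which variant was used.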

    The final goal will be to apply those algorithms to both the full-scale filter data and the pilot scale filter runs. These pilot scale filter runs are runs where a filter breakthrough condition was actually observed. Again, we will determine if the algorithms can be applied to either particle counter or laser nephelometer data with a high level of success.

    Materials and Methods

    This study was separated into two phases. Phase 1 involved the use of a pilot-scale plant modeling a full-scale direct filtration plant. During this phase, three of the pilot filter runs concluded with a breakthrough of the filter. Phase 2 involved the collection and analysis of data from over 50 consecutive filter runs conducted at a full-scale water treatment plant in the summer of 2000. A standard filter run was selected from this collection of data and was used to determine if the same algorithms used in the pilot study could be used to quantify the performance of the full-scale filter run.

    Phase 1

The pilot study was conducted at a direct-filtration water treatment plant for the city of Aurora, Colorado. The pilot plant is designed to simulate the direct filtration processes of this WTP; the flow through the pilot plant was scaled to simulate 40 million gallons per day (MGD). The raw water first flowed into the pilot plant flocculation basin. As this water entered the basin, it was injected with three chemicals: a polyelectrolyte cationic polymer (PEC) at a resultant concentration of 1.6 mg/L, alum at 7 mg/L, and chlorine at 4 mg/L. After a detention time of approximately one hour, the flocculated water flowed to two filters, labeled #3 and #4.

    Filter #3 is a coarse dual-media filter. This filter is comprised of a 12-inch sand layer with a media diameter of 0.55 to 0.65 mm. Covering the sand layer is a 60-inch anthracite layer. The diameter of the anthracite particles is between 1.2 and 1.4 mm.

    Filter #4 is a fine dual-media filter. Like filter #3, it consists of a 12-inch sand layer covered by a 60-inch anthracite layer. The diameter of the filter #4 media is slightly smaller—the sand diameter ranges from 0.50 to 0.60 mm and the anthracite particle diameter range is between 1.0 and 1.2 mm.

    A total of three breakthroughs, referred to as A, C, and D, were successfully forced on these filters. Each filter was monitored through breakthrough using a laser nephelometer (the FT660), and a particle counter (2200PCX). Data was recorded at 1-minute intervals throughout the entire filter run. The logged data was pulled into a small SCADA system that was designed specifically for this pilot plant. From the SCADA system, data could be downloaded as a CSV file into an Excel® (Copyright Microsoft Corporation) spreadsheet where further analysis was conducted. Table 1 summarizes the mechanism used to cause each of the breakthroughs.

    Table 1 Summary of the Direct Filtration Pilot Plant Breakthroughs
    (refer to the PDF version to see the table)

For each of the runs resulting in breakthrough, the logged data was transferred to an Excel spreadsheet to verify that the instruments monitoring the effluent water did, in fact, see the breakthrough. Next, four different process algorithms, each based on the relative standard deviation (RSD), were applied to this data to determine if any of them could assist in predicting the breakthrough and further assess the effluent quality during the run. The RSD is simply the standard deviation of a set number of measurements divided by the mean (average) of the same measurements, expressed as a percent. The four RSD algorithms differed in the number of measurements used to generate the RSD: 3, 7, 12, and 20 measurements.
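
As a sketch of how this calculation can be reproduced from logged data (the CSV layout and column names below are illustrative assumptions, not the study's actual file format):

```python
import pandas as pd

def rolling_rsd(series: pd.Series, window: int) -> pd.Series:
    """Relative standard deviation (%) over a trailing window of measurements."""
    rolling = series.rolling(window=window)
    return 100.0 * rolling.std() / rolling.mean()  # std() uses the n-1 divisor

# Hypothetical 1-minute logger export: a timestamp column plus one column
# per instrument (these column names are assumptions for illustration).
df = pd.read_csv("filter_run.csv", parse_dates=["timestamp"], index_col="timestamp")

for n in (3, 7, 12, 20):
    df[f"rsd{n}_turbidity"] = rolling_rsd(df["turbidity_mntu"], n)
    df[f"rsd{n}_counts"] = rolling_rsd(df["particle_counts_per_ml"], n)
```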

The algorithms were applied to the data in a way designed to simulate their use in a potential real-time application. The algorithms were run in a process format, recalculating the RSD value each time a turbidity or particle count measurement was performed. This technique was intended to demonstrate that statistical methods can be applied in real time to data as it is collected, and that real-time statistical results can help judge a filter run’s performance as the run progresses.
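
A minimal sketch of this process-format calculation, recomputing the RSD as each new reading arrives (the class name and sample values are illustrative, not from the study):

```python
from collections import deque
from statistics import mean, stdev

class RunningRSD:
    """Recompute the RSD (%) each time a new measurement is logged."""

    def __init__(self, window: int):
        self.window = window
        self.values = deque(maxlen=window)

    def update(self, measurement: float):
        """Add one reading; return the current RSD, or None until the window fills."""
        self.values.append(measurement)
        if len(self.values) < self.window:
            return None
        avg = mean(self.values)
        return 100.0 * stdev(self.values) / avg if avg else None

# Example: feed 1-minute turbidity readings (mNTU) as they arrive.
rsd3 = RunningRSD(window=3)
for reading in (22.0, 21.5, 22.3, 30.1, 55.0):
    print(rsd3.update(reading))
```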

After the RSD calculations are performed on each of the filter runs, we will determine which algorithm (based on 3, 7, 12, or 20 measurements) is best suited for assessing the noise of the filter run if applied in real time. The algorithm that shows the greatest sensitivity (largest RSD values) would be best suited for this assessment. Each algorithm will be tested to see if the breakthrough event could have been predicted. In addition, we will determine if the assessed data from either the particle counter or the laser nephelometer offers an advantage in evaluating filter performance and predicting breakthrough.

    Phase 2

The full-scale water treatment plant is a 30 MGD plant that uses dual-media filtration. Treatment of the raw water involved flocculation followed by lamella-plate sedimentation. A non-ionic filter aid was applied to the filter after each backwash.

    The filter effluent from each run was analyzed in triplicate by concurrently using three FilterTrak 660 (FT660) laser nephelometers. In addition, the effluent was monitored for total particle counts with a size greater than 2 µm. All four instruments were installed on the effluent stream in parallel, with flow settings that were within 10% of each other on the FT660 instruments, and a flow of 100 mL/minute on the particle counter. Prior to data collection, the laser nephelometers were calibrated using primary calibration standards.

    During this study, over 50 filter runs were evaluated to determine if the baseline noise increases as the filter run progresses. From this data, the authors selected a total of seven consecutive filter runs to determine if the same algorithms used in Phase 1 of this study could be applied to this real-world data. The data was analyzed to determine if the algorithms are appropriate for use on this data and if so, which algorithms are appropriate for assessing the “noise” level in the effluent stream. In addition, the evaluated data was examined to see if any of these algorithms could be applied to the data to predict a filter breakthrough.

    Results and Discussion – Phase 1

The plant operator initially observed that breakthroughs A, C, and D were detected only by the particle counter. After further investigation, it was determined that this was due to improper scaling of the SCADA inputs from the laser nephelometer. Once the FT660 measurements were converted to the proper mNTU scale and the resolution of the nephelometer was correctly presented, the data showed that the breakthroughs were easily detected by the laser nephelometer as well.

The filter run in which breakthrough A occurred is displayed in Figure 3. This run is characterized by a defined ripening period at approximately 1400, followed by a steady-state filter run that lasts approximately 60 minutes. At 1500, the alum feed pump is halted to force a breakthrough condition. The particle counter and laser nephelometer both begin to observe the breakthrough at approximately 1534 and start to trend upward simultaneously. At 1554, the regulatory turbidity limit of 0.3 NTU (300 mNTU) is exceeded. The time when the regulatory limit is exceeded will be used as the reference point for the breakthrough event.
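
Continuing the earlier sketch, this reference point can be located programmatically as the first sample exceeding the 0.3 NTU (300 mNTU) limit (again assuming the illustrative df and column names from above):

```python
REG_LIMIT_MNTU = 300.0  # 0.3 NTU regulatory limit expressed in mNTU

# First timestamp at which the effluent turbidity exceeds the regulatory
# limit; per the text, this serves as the breakthrough reference point.
exceedances = df.index[df["turbidity_mntu"] > REG_LIMIT_MNTU]
breakthrough_time = exceedances[0] if len(exceedances) > 0 else None
print("Breakthrough reference point:", breakthrough_time)
```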

    Figure 3 — Filter Breakthrough Data –Run A – Aurora Pilot Plant, June 9, 2001
    (refer to the PDF version to see the chart)

The relative standard deviation (RSD) was calculated for both the turbidity values and the particle count values throughout the course of the filter run shown in Figure 3. The RSD values were calculated from a set number of running measurements, which were used to compute the average and standard deviation values. (These values were then used to generate the RSD value.) Four window sizes were used: 3 measurements (denoted RSD-3), 7 (RSD-7), 12 (RSD-12), and 20 (RSD-20).

    Figure 4 displays the RSD algorithms, which are overlaid with the turbidity measurements during the filter run. Figure 5 displays the plotted particle count data.

    Figure 4 Turbidity Breakthrough Data — Using Relative Standard Deviation to Predict Breakthrough — Aurora Pilot Plant Run A, June 9, 2001
    (refer to the PDF version to see the chart)

In Figure 4, the turbidity of the filter run is displayed on the left y-axis and the RSD algorithms are plotted on the right y-axis. Two dramatic RSD peaks, RSD-3 and RSD-20, appear prior to the breakthrough time of 1554. The RSD-20 peak resulted from turbidity changes during the ripening period and, taken alone, would be interpreted as a false-positive spike; the confusion stems from the large number of samples used to generate the RSD values. On the other hand, the RSD-3 peak appears to be more indicative of the actual breakthrough event, and it overlaps the turbidity spike as well. Further, the RSD-3 peak began approximately five minutes after the alum pump “failed” (at 1500). The magnitude of the RSD-3 peak indicates a significant change in the baseline noise, which may reflect the changes to the process stream from the pump failure.

The other RSD algorithms shown in Figure 4 (RSD-7 and RSD-12) display far less dramatic peaks as a result of the breakthrough; the two begin to show a peak approximately 5 minutes before the start of the turbidity upward trend. In this case, the larger number of samples used to generate the RSD values has a dampening effect on the turbidity noise of the baseline. Because of this, these two algorithms were not effective in predicting the breakthrough. Overall, the shortest RSD algorithm was the only one that was beneficial in this application.
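
The dampening effect of larger windows can be seen on purely synthetic data (an illustration of the statistical behavior, not data from the study): a brief disturbance dominates a 3-sample window but is diluted across a 20-sample window.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic quiet baseline near 20 mNTU, with a brief 3-minute disturbance.
turbidity = pd.Series(20.0 + 0.2 * rng.standard_normal(120))
turbidity.iloc[60:63] += 5.0

for n in (3, 20):
    rsd = 100.0 * turbidity.rolling(n).std() / turbidity.rolling(n).mean()
    print(f"RSD-{n}: peak {rsd.max():.1f}% vs. median {rsd.median():.1f}%")
```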

The particle count and RSD overlay is displayed in Figure 5. Here the total particle counts are on the left y-axis and the RSD values are plotted on the right y-axis. In this plot, only one algorithm, RSD-3, showed a significant RSD peak prior to the actual breakthrough at 1558. The RSD-3 peak is significant, with a maximum value of 7802 versus a baseline value of 18. This peak began to trend upward within 2 minutes of the alum pump “failure,” an excellent indicator of an upset in the process before the actual breakthrough spike was seen.

    The particle counter RSD algorithms from RSD-7, RSD-12, and RSD-20, are also displayed in Figure 5. Slight increases that coincide with the actual particle count spike are observed, but could easily be misinterpreted as baseline noise.

    Figure 5 Particle Count Breakthrough Data — Using Relative Standard Deviation to Predict Breakthrough — Aurora Pilot Plant Run A, June 9, 2001
    (refer to the PDF version to see the chart)

    Similar results were observed with breakthroughs C and D. However, it was much more difficult to interpret their RSD spikes because of shorter steady-state conditions during these two filter runs. The shorter steady-states led to much higher RSD baselines and made determining distinct RSD peaks prior to breakthrough much more difficult for the RSD-7, RSD-12, and RSD-20 algorithms. The only algorithm that showed any consistency for both the particle counter and the turbidimeter was the RSD-3 algorithm. Using this algorithm resulted in significant RSD peaks before the turbidity and particle count values started trending up for the breakthroughs.

To summarize Phase 1: the pilot-scale data allowed us to apply the RSD algorithms to data in a simulated real-time application. However, due to the short duration of the steady-state condition in two of the filter runs, only one run yielded a reasonable application. This filter run, designated Run A, did show that the process algorithms could be applied and could assist in predicting the breakthrough without the chance of a false-positive peak. The best algorithm to apply to both the laser nephelometer and the particle counter readings was RSD-3.

    The three filter breakthroughs were easily detected at the exact same time by both instruments that were monitoring the effluent stream. It is critical to appropriately scale the laser nephelometer analog output signal to allow full use of this instrument’s detection sensitivity. Initially, the improper scaling of the laser nephelometer measurements led the operator to believe that the instrument was blind to the breakthrough.

    Results and Discussion – Phase 2

In Phase 2, we examined seven consecutive filter runs from the full-scale WTP. During each of these runs, the effluent turbidity and particle counts were recorded at 1-minute intervals. The turbidity measurements were performed with the laser nephelometer (FT660). This instrument exhibits higher sensitivity and greater baseline stability than traditional process instruments and allows the turbidity to be measured in mNTU units.

    The same approach to the data used in Phase 1 was applied in Phase 2. The four RSD algorithms were applied to the full-scale plant data. After these calculations were complete, a plot was generated showing the RSD values overlaid on the turbidity measurements. Figure 6 provides an example of the overlay turbidity graph.

    Figure 6 Filter Run Turbidity vs. Relative Standard Deviation for WTP Effluent
    (refer to the PDF version to see the chart)

In Figure 6, the filter run for May 6, 2000 (see Figure 2) was again analyzed for its turbidity and an assessment of its respective baseline noise. The left y-axis represents the turbidity and the right y-axis represents the RSD calculations of the four different algorithms over the course of the filter run.

