Multidimensional exploratory techniques, such as Principal Component Analysis (PCA), have been used to analyze long-term changes in the flow regime and water quality of the lowland dam reservoir Turawa (south-west Poland) in the catchment of the Mała Panew river (a tributary of the Odra). The paper proves that during the period of 1998–2016 the Turawa reservoir was equalizing the river's water flow. Moreover, various physicochemical water quality indicators were analyzed at three measurement points (at the tributary's mouth into the reservoir, in the reservoir itself and at the outflow from the reservoir). The water quality assessment was performed by analyzing physicochemical indicators such as water temperature, TSS, pH, dissolved oxygen, BOD5, NH4+, NO3-, NO2-, N, PO43-, P, electrolytic conductivity, DS, SO42- and Cl-. Furthermore, the correlations between all these water quality indicators were analyzed statistically at each measurement point, at the statistical significance level of p ≤ 0.05. PCA was used to determine the structures between these water quality variables at each measurement point. As a result, a theoretical model was obtained that describes the regularities in the relationships between the indicators. PCA has shown that biogenic indicators have the strongest influence on the water quality in the Mała Panew. Lastly, the differences between the averages of the water quality indicators of the inflowing and of the outflowing water were considered and their significance was analyzed. PCA unveiled the structure and complexity of the interconnections between river flow and water quality. The paper shows that such statistical methods can be valuable tools for developing suitable water management strategies for the catchment and the reservoir itself.
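A minimal sketch of the PCA step described above, using synthetic standardized indicator data; the variable names, correlation structure and sample size are assumptions for illustration, not the Mała Panew measurements:

```python
import numpy as np

# Sketch of PCA on standardized water-quality indicators.
# Synthetic data: four correlated "nutrient" indicators plus one
# independent indicator (stand-ins; not the paper's measurements).
rng = np.random.default_rng(0)
n = 120
nutrients = rng.normal(size=(n, 1)) @ np.ones((1, 4)) + 0.3 * rng.normal(size=(n, 4))
ph = rng.normal(size=(n, 1))
X = np.hstack([nutrients, ph])

# standardize, then eigendecompose the correlation matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0)
corr = (Z.T @ Z) / n
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1]          # eigh returns ascending order
eigval, eigvec = eigval[order], eigvec[:, order]

explained = eigval / eigval.sum()
# the first component bundles the correlated nutrient-like indicators,
# mirroring how PCA exposes a dominant "biogenic" factor
print(explained[0])
```

The first component's explained-variance share is high here only because the synthetic nutrient columns were generated from a common factor; with real data the loadings identify which indicators move together.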
The paper presents preliminary results of investigations into the relationship between turbidity and other quality parameters in the SBR plant effluent. The laboratory tests demonstrated a high correlation between effluent turbidity and total suspended solids (TSS) concentration, as well as between TSS and COD. Such a relationship would help to continuously monitor and control the quality of wastewater discharge using turbidity measurement.
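The turbidity-TSS relationship can be illustrated with a Pearson correlation and a least-squares fit; the slope, units and noise level below are invented for illustration and are not the plant's calibration:

```python
import numpy as np

# Synthetic effluent data: TSS assumed roughly linear in turbidity
# (slope 1.8 mg/L per NTU is a made-up value).
rng = np.random.default_rng(1)
turbidity = rng.uniform(2, 50, size=80)             # NTU
tss = 1.8 * turbidity + rng.normal(0, 2, size=80)   # mg/L

# Pearson correlation and least-squares fit TSS ~ a*turbidity + b
r = np.corrcoef(turbidity, tss)[0, 1]
a, b = np.polyfit(turbidity, tss, 1)
print(round(r, 3), round(a, 2))
```

With a fitted slope and a high correlation, an online turbidity probe could serve as a continuous surrogate for TSS, which is the monitoring idea the abstract describes.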
Nutrient pollution such as nitrate (NO3−) can cause water quality degradation in rivers used as a source of drinking water. This raises the question of how nutrients move depending on factors such as land use and anthropogenic sources. Researchers have developed several nutrient export coefficient models based on these factors. To this purpose, statistical data including historical water quality and land use data for the Melen Watershed were used. Nitrate export coefficients are estimates of the total load or mass of nitrate (NO3−) exported from a watershed, standardized to unit area and unit time (e.g. kg/km2/day). In this study, nitrate export coefficients for the Melen Watershed were determined using a model covering both frequentist and Bayesian approaches. The river retention coefficient was determined and introduced into the model as an important variable.
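The export coefficient definition above reduces to simple arithmetic; the load, area and period used below are hypothetical, not Melen Watershed estimates:

```python
# Nitrate export coefficient: total load normalized to unit area and
# unit time, as defined in the abstract (numbers are illustrative).
def export_coefficient(load_kg, area_km2, days):
    """Return the export coefficient in kg/km2/day."""
    return load_kg / (area_km2 * days)

# e.g. 36,500 kg of NO3- exported from a 100 km2 sub-basin over one year
coef = export_coefficient(36_500, 100, 365)
print(coef)  # 1.0 kg/km2/day
```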
People spend most of their time in indoor environments, which consequently contribute more to daily pollutant exposure than outdoor environments. In the case of children, a great part of their time is spent at school; therefore, evaluations of this microenvironment are important to assess their time-weighted exposure to air pollutants. The aim of this study was to assess children's exposure to bioaerosols at schools in two different types of areas, urban and rural. A methodology based upon passive sampling was applied to evaluate fungi, bacteria and pollen, simultaneously with active sampling for fungal and bacterial assessment. Results showed very good correlations between the sampling methods, especially for the summer season. Passive sampling methodologies have the advantage of requiring no specific and expensive equipment while providing important qualitative information. The study was conducted in different periods of the year to examine the seasonal variation of the bioaerosols. Fungi and pollen presented higher levels during the summer, whereas bacteria did not show a seasonal variation. Indoor-to-outdoor ratios were determined to assess the influence of outdoor contamination on the indoor environment. Levels of fungi were higher outdoors, while bacteria presented higher concentrations indoors. Indoor levels of bioaerosols were assessed in primary schools of urban and rural areas, using the active method along with a passive sampling method. Very good correlations between the methods were found, which allows the passive sampling method to be used to supply important and reliable qualitative information on bioaerosol concentrations in indoor environments. Seasonal variations in bioaerosol concentrations were found for fungi and pollen.
Concentrations of fungi and bacteria above the Acceptable Maximum Value (AMV) were found in most of the studied classrooms, showing the importance of this microenvironment in children's high exposure to bioaerosols.
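The indoor-to-outdoor (I/O) ratio used in the study above is a plain quotient of concentrations; the CFU/m3 values below are illustrative only, not the measured school data:

```python
# Indoor/outdoor (I/O) ratio used to gauge the outdoor contribution to
# indoor bioaerosol levels; I/O < 1 suggests an outdoor-dominated source,
# I/O > 1 an indoor one (values are made up for illustration).
def io_ratio(indoor_cfu_m3, outdoor_cfu_m3):
    return indoor_cfu_m3 / outdoor_cfu_m3

fungi_io = io_ratio(320.0, 800.0)      # fungi: higher outdoors -> I/O < 1
bacteria_io = io_ratio(950.0, 400.0)   # bacteria: higher indoors -> I/O > 1
print(fungi_io < 1.0, bacteria_io > 1.0)
```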
In this study, the suitability of the water quality index approach and environmetric methods for fingerprinting heavy metal pollution, as well as for comparing the spatial variability of multiple contaminants in surface water, was assessed for the Gediz River Basin, Turkey. Water quality variables were categorized into two classes using factor and cluster analysis. Furthermore, a soil contamination index was adapted into a water pollution index and used to find out the relative relationship between the reference standards and the current state of heavy metal contamination in water. Results revealed that the heavy metal content of the surface water was mainly governed by the metal processing, textile and tannery industries in the region. Metal processing industry discharges mainly degraded water quality in Kemalpasa and Menemen, and the Kemalpasa region has been heavily affected by tannery and textile industry effluents. Moreover, the pollution parameters were not influenced by changes in physical factors (discharge and temperature). This study demonstrated the effectiveness of the water quality index approach and statistical tools in fingerprinting pollution and in the comparative assessment of water quality. Both methods can assist decision makers in determining priorities in management practices.
The aim of this article is to evaluate the quality of the Danube River in its course through Serbia and to demonstrate the possibilities of using three statistical methods, Principal Component Analysis (PCA), Factor Analysis (FA) and Cluster Analysis (CA), in surface water quality management. Given that the Danube is an important trans-boundary river, thorough water quality monitoring by sampling at different distances over shorter and longer periods of time is not only an ecological but also a political issue. Monitoring was carried out at monthly intervals from January to December 2011, at 17 sampling sites. The obtained data set was treated by multivariate techniques in order, firstly, to identify the similarities and differences between sampling periods and locations, secondly, to recognize the variables that affect temporal and spatial water quality changes, and thirdly, to present the anthropogenic impact on water quality parameters.
In order to predict the distribution of shrinkage porosity in steel ingots efficiently and accurately, a criterion R√L and a method to obtain its threshold value were proposed. The criterion R√L was derived based on the solidification characteristics of the steel ingot and the pressure gradient in the mushy zone, in which the physical properties, the thermal parameters, the structure of the mushy zone and the secondary dendrite arm spacing were all taken into consideration. The threshold value of the criterion R√L was obtained by combining numerical simulation of ingot solidification with the total solidification shrinkage rate. Prediction of the shrinkage porosity in a 5.5 t ingot of 2Cr13 steel with the criterion R√L > 0.21 m·°C^1/2·s^-3/2 agreed well with the results of experimental sectioning. Based on this criterion, optimization of the ingot was carried out by decreasing the height-to-diameter ratio and increasing the taper, which successfully eliminated the centreline porosity and further proved the applicability of this criterion.
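Applying such a criterion to simulation output amounts to thresholding the field R√L node by node; the R and L arrays below are invented placeholders, and only the 0.21 threshold follows the abstract:

```python
import numpy as np

# Flagging shrinkage-porosity risk where the criterion R*sqrt(L) exceeds
# the threshold quoted in the abstract. The R and L values are invented
# stand-ins for per-node quantities taken from a solidification simulation.
R = np.array([0.05, 0.10, 0.30, 0.40])   # hypothetical per-node R values
L = np.array([1.0, 0.8, 0.9, 1.2])       # hypothetical per-node L values

criterion = R * np.sqrt(L)
porous = criterion > 0.21                # porosity predicted where True
print(porous)
```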
The aim of this publication is to present a practical application of R. Kolman's quality rating method in the evaluation of aluminium alloys. The results of studies of the mechanical and physical properties of the three selected test materials are discussed. To find the best material, the quality level of each of the tested materials was assessed using the quality ratings proposed by R. Kolman. The results of the conducted analysis proved that the best material was the AK11 MM alloy, i.e. a cast AK11 aluminium alloy from the 4XXX series.
Based on publications regarding new or recent measurement systems for tokamak plasma experiments, it can be seen that the monitoring and quality validation of input signals for the computation stage is done in different, often simple, ways. The paper describes a unique approach to implementing the novel evaluation and data quality monitoring (EDQM) model for use in various measurement systems. The model was adapted for the FPGA-based, GEM-based soft X-ray measurement system. The EDQM elements have been connected to the base firmware using PCI-E DMA real-time data streaming with minimal modification. On-board DDR3 memory has been used as additional storage. A description of the implemented elements is provided, along with the designed data processing tools and an advanced simulation environment based on the Questa software.
The new legislative provisions regulating the solid fuel trade in Poland, and the resolutions of provincial assemblies, assume, inter alia, a ban on the household use of lignite fuels and of solid fuels produced with their use; this also applies to coal sludge, coal flotation concentrates, and mixtures produced with their use. These changes will force the producers of these materials to find new ways and methods for their utilization, including their modification (mixing with other products or waste) in order to increase their attractiveness for the commercial power industry. The presented paper focuses on the analysis of coal sludge classified as waste (codes 01 04 12 and 01 04 81) or as a by-product in the production of coals of different types. A preliminary analysis, based on mixtures of hard coal sludge (PG SILESIA) with coal dust from lignite (pulverized lignite, LEAG), has been carried out to present the changes in quality parameters. The analysis of the quality parameters of the discussed mixtures included the determination of the calorific value, ash content, volatile matter content, moisture content, heavy metal content (Cd, Tl, Hg, Sb, As, Pb, Cr, Co, Cu, Mn, Ni, and W), and sulfur content. The preliminary analysis has shown that mixing coal sludge with coal dust from lignite and granulating the mixture allows a product with the desired quality and physical parameters to be obtained, which is attractive to the commercial power industry. Compared to coal sludge, granulates made of coal sludge and coal dust from lignite, with or without ground dolomite, have a higher sulfur content (in the range of 1–1.4%). However, this is still an acceptable content for solid fuels in the commercial power industry.
Compared to the basic coal sludge sample, the observed increase in the content of individual toxic components in the mixture samples is small; it can therefore be concluded that the addition of coal dust from lignite or carbonates has no significant effect on the total content of the individual elements. The calorific value is a key parameter determining usefulness in the power industry. For coal sludge, this parameter on an as-received basis is in the range of 9.4–10.6 MJ/kg. In the case of the examined mixtures of coal sludge with coal dust from lignite, the calorific value increases significantly, to the range of 14.0–14.5 MJ/kg (as received). The obtained values increase the usefulness in the commercial power industry while, at the same time, the requirements for the combustion of solid fuels are met to a greater extent. A slight decrease in the calorific value is observed in the case of granulation with the addition of CaO or carbonates. Taking the analyzed parameters into account, it can be concluded that the prepared mixtures can be used for combustion in units with flue gas desulfurization plants and a nominal thermal power of not less than 1 MW. At this stage of the work, no cost analysis was carried out.
The paper presents the results of an experimental validation of a set of innovative software services supporting the processes of achieving, assessing and maintaining conformance with standards and regulations. The study involved several hospitals implementing the Accreditation Standard promoted by the Polish Ministry of Health. First, we introduce the NOR-STA services that implement the TRUST-IT methodology of argument management. Then we describe and justify a set of metrics aimed at assessing the effectiveness and efficiency of the services. Next, we present the values of the metrics built from the collected data. The paper concludes with the interpretation and discussion of the measurement results with respect to the objectives of the validation experiment.
In renewable energy systems, faults may occur in the grid or the power transmission line, and environmental conditions such as unbalanced wind speed may arise while the wind turbine is connected to the grid. In such cases the control system must not be damaged and must keep the power transmission system stable. Voltage stability studies of a stand-alone wind turbine during a fault and after its clearance are among the topics that can strengthen the future of stand-alone installations. At the time of the fault, the network current increases dramatically, resulting in a higher voltage drop. Hence, fast voltage recovery during and after the fault, protection of the rotor- and grid-side converters against the fault current, and protection against the rising DC voltage (which increases sharply during a fault) receive much attention. Therefore, several improvements have been made to the construction of a doubly-fed induction generator (DFIG) turbine, such as: a) a fault detection system, b) DC-link protection, c) a crowbar circuit, d) blocking of the rotor- and stator-side converters, e) injection of reactive power during the fault, f) nonlinear control design for the turbine blades, g) tuning and harmonization of the controllers used to maintain power quality and to stabilize the system output voltage in the power grid. First, the dynamic models of the wind turbine, gearbox, and DFIG are presented. Then the controllers are modeled. The simulation results have been validated in MATLAB/Simulink.
The paper deals with the problem of the optimal use of an automated workplace for HPDC technology, mainly in terms of the operation sequence, the efficiency of the work cycle, and the planning of the use and servicing of the HPDC casting machine. Possible ways to analyse automatic units for HPDC are presented. The experimental part was focused on the rationalization of the current work cycle time for die casting of an aluminium alloy. The workplace was described in detail in the project. The measurements were documented in detail with charts and graphs mapping the cycle of the casting workplace. Other parameters and settings were also identified. Proposals for improvements were made after the first measurements, and these improvements were subsequently verified. The main actions were software modifications of the casting center, because today's sophisticated workplaces offer a relatively wide range of modifications without any physical harm to the machines themselves. It is possible to change settings or unlock some unsatisfactory parameters.
The purpose of this paper was to test the suitability of time-series analysis for quality control of the continuous steel casting process in production conditions. The analysis was carried out on industrial data collected in one of the Polish steel plants. The production data concerned defective fractions of billets obtained in the process. The procedure of industrial data preparation is presented. The computations for the time-series analysis were carried out in two ways, both using the authors' own software. The first one, applied to the real-number type of data, has a wide range of capabilities, including not only prediction of future values but also detection of important periodicity in the data. In the second approach the data were assumed in a binary (categorical) form, i.e. every heat (melt) was labeled as 'Good' or 'Defective'. The naïve Bayesian classifier was used for predicting the successive values. The most interesting results of the analysis include the good prediction accuracies obtained by both methodologies, the crucial influence of the last preceding point on the predicted result for the real-data time-series analysis, as well as information about the type of misclassification for the binary data. The possibility of predicting future values can be used by engineering or operational staff with expert knowledge to decrease the fraction of defective products by taking appropriate action when the forthcoming period is identified as critical.
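A toy version of the binary (Good/Defective) prediction idea, with a simple conditional-frequency predictor standing in for the naïve Bayesian classifier; the heat sequence is made up, not the plant's records:

```python
from collections import Counter

# Predict the next heat label from the preceding one via conditional
# transition frequencies (a minimal stand-in for the paper's classifier).
history = list("GGDGGGDGGGGDGGDGGGGG")   # G = Good, D = Defective (toy data)

# count transitions prev -> next over the history
trans = Counter(zip(history, history[1:]))

def predict_next(prev):
    """Return the most frequent label following `prev` in the history."""
    counts = {nxt: trans[(prev, nxt)] for nxt in "GD"}
    return max(counts, key=counts.get)

print(predict_next("G"), predict_next("D"))
```

This captures the "crucial influence of the last preceding point" noted in the abstract: the prediction depends only on the most recent label and the learned transition frequencies.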
Statistical Process Control (SPC) based on Shewhart-type control charts is widely used in the contemporary manufacturing industry, including many foundries. The main steps include process monitoring, detection of out-of-control signals, and identification and removal of their causes. Finding the root causes of process faults is often a difficult task and can be supported by various tools, including data-driven mathematical models. In the present paper a novel approach to the statistical control of the ductile iron melting process is proposed. It is aimed at developing methodologies suitable for effectively finding the causes of out-of-control signals in the process outputs, defined as ultimate tensile strength (Rm) and elongation (A5), based mainly on the chemical composition of the alloy. The methodologies are tested and presented using several real foundry data sets. First, correlations between standard abnormal output patterns (i.e. out-of-control signals) and the corresponding input patterns are found, based on the detection of similar patterns and similar shapes of the run charts of the chemical element contents. It was found that in a significant number of cases there was no clear indication of a correlation, which can be attributed either to the complex, simultaneous action of several chemical elements or to causes related to other process variables, including melting, inoculation, spheroidization and pouring parameters, as well as human errors. A conception of a methodology based on simulation of the process using advanced input-output regression modelling is presented. The preliminary tests have shown that it can be a useful tool in process control and is worth further development. The results obtained in the present study may not only be applied to the ductile iron process but can also be utilized in the statistical quality control of a wide range of different discrete processes.
The aim of the paper was an attempt at applying time-series analysis to the control of the melting process of grey cast iron in production conditions. The production data were collected in one of the Polish foundries in the form of spectrometer printouts. The quality of the alloy was controlled by its chemical composition at about 0.5-hour time intervals. The procedure of preparation of the industrial data is presented, including an OCR-based method of transformation to the electronic numerical format as well as the generation of records related to particular weekdays. The computations for the time-series analysis were made using the author's own software having a wide range of capabilities, including detection of important periodicity in the data as well as regression modeling of the residual data, i.e. the values obtained after subtraction of the general trend, the trend of the variability amplitude and the periodical component. The most interesting results of the analysis include: a significant 2-measurement periodicity of the percentages of all components, a significant 7-day periodicity of the silicon content measured at the end of a day, and the relatively good prediction accuracy obtained without modeling of the residual data for various types of expected values. Some practical conclusions have been formulated, related to possible improvements in the melting process control procedures as well as more general tips concerning applications of time-series analysis in foundry production.
Statistical Process Control (SPC) based on the well-known Shewhart control charts is widely used in the contemporary manufacturing industry, including many foundries. However, the classic SPC methods require that the measured quantities, e.g. process or product parameters, are not auto-correlated, i.e. their current values do not depend on the preceding ones. For processes which do not obey this assumption, Special Cause Control (SCC) charts were proposed, utilizing the residual data obtained from time-series analysis. In the present paper the results of the application of SCC charts to a green sand processing system are presented. The tests, made on real industrial data collected in a big iron foundry, were aimed at comparing the occurrences of out-of-control signals detected in the original data with those appearing in the residual data. It was found that the application of the SCC charts reduces the number of signals in almost all cases. It is concluded that this can be helpful in avoiding false signals, i.e. those resulting from predictable factors.
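The SCC idea, charting the residuals of a time-series model instead of the raw autocorrelated data, can be sketched as follows; the AR(1) data are synthetic, not the foundry's sand-system measurements:

```python
import numpy as np

# SCC sketch: fit an AR(1) model to autocorrelated data and put 3-sigma
# control limits on the residuals, which are (approximately) independent.
rng = np.random.default_rng(3)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.8 * x[t - 1] + rng.normal(0.0, 1.0)   # true AR coefficient 0.8

# least-squares estimate of the AR(1) coefficient, then residuals
phi = np.polyfit(x[:-1], x[1:], 1)[0]
resid = x[1:] - phi * x[:-1]

# Shewhart-style limits applied to the residual chart
sigma = resid.std()
signals = int(np.sum(np.abs(resid) > 3 * sigma))
print(round(phi, 2), signals)
```

Charting `x` directly with Shewhart limits would flag many points that are merely consequences of the autocorrelation; the residual chart removes that predictable component first.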
The paper presents an application of advanced data-driven (soft) models in finding the most probable particular causes of missed ductile iron melts. The proposed methodology was tested using a real foundry data set containing 1020 records with the contents of 9 chemical elements in the iron as the process input variables and the ductile iron grade as the output. This dependent variable was of a discrete (nominal) type with four possible values: '400/18', '500/07', '500/07 special' and 'non-classified', i.e. the missed melt. Several types of classification models were built and tested: an MLP-type Artificial Neural Network, a Support Vector Machine and two versions of Classification Trees. The best prediction accuracy was achieved by one of the Classification Tree models, which was then used in the simulations leading to the conversion of the missed melts to the expected grades. Two strategies of changing the input values (chemical composition) were tried: changing the content of a single element at a time and simultaneous changes of a selected pair of elements. It was found that in the vast majority of the missed melts the changes of single element concentrations led to the change from the non-classified iron to its expected grade. In the case of the three remaining melts, the simultaneous changes of pairs of element concentrations appeared to be successful, and those cases were in agreement with the foundry staff's expertise. It is concluded that utilizing an advanced data-driven process model can significantly facilitate the diagnosis of defective products and out-of-control foundry processes.
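The single-element "what-if" search can be sketched with a hand-made grade rule standing in for the trained Classification Tree; the rule, element names and thresholds are invented for illustration and are not the paper's model:

```python
# Toy stand-in for the trained classifier: a hypothetical grade rule
# based on two element contents (chemistry thresholds are invented).
def grade(mg, cu):
    if mg < 0.03:
        return "non-classified"       # hypothetical: too little Mg
    return "500/07" if cu > 0.5 else "400/18"

missed = {"mg": 0.02, "cu": 0.2}      # a hypothetical missed melt
target = "400/18"

# strategy 1 from the abstract: change one element at a time over a grid
fix = None
for element in ("mg", "cu"):
    for value in [0.01 * k for k in range(1, 101)]:
        trial = dict(missed, **{element: value})
        if grade(**trial) == target:
            fix = (element, value)
            break
    if fix:
        break
print(fix)
```

With the real model, the same loop runs the tree's prediction instead of `grade`, and a second loop over element pairs implements the paper's other strategy.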
One-dimensional frequency analysis based on the DFT (Discrete Fourier Transform) is sufficient in many cases for detecting power disturbances and evaluating power quality (PQ). To illustrate the character of the signal in a more comprehensive manner, time-frequency analyses are performed. The best-known time-frequency representations (TFR) are the spectrogram (SPEC) and the Gabor Transform (GT). However, these methods have a relatively low time-frequency resolution. Other TFRs described in the paper are the Discrete Dyadic Wavelet Transform (DDWT), the Smoothed Pseudo Wigner-Ville Distribution (SPWVD) and the new Gabor-Wigner Transform (GWT). The main features of the transforms are presented on the basis of test signals.
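A minimal DFT-based spectrogram, the SPEC representation mentioned above, applied to a test signal in which a harmonic appears mid-record; the signal parameters (50 Hz fundamental, 5th harmonic switched on at 0.5 s) are assumptions for illustration:

```python
import numpy as np

# Short-time DFT spectrogram of a PQ-style test signal: a 50 Hz
# fundamental with a 250 Hz harmonic appearing halfway through.
fs = 1000                                    # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t)
signal[500:] += 0.5 * np.sin(2 * np.pi * 250 * t[500:])

win = 100                                    # 100-sample rectangular window
frames = signal.reshape(-1, win)             # non-overlapping frames
spec = np.abs(np.fft.rfft(frames, axis=1))   # magnitude spectrum per frame
freqs = np.fft.rfftfreq(win, 1 / fs)         # 10 Hz bin spacing

# the 250 Hz bin is strong only in the second half of the record
print(spec[0, freqs == 250], spec[-1, freqs == 250])
```

The 100-sample window fixes the trade-off the abstract mentions: 10 Hz frequency resolution against 0.1 s time resolution; the other TFRs listed above aim to improve on this.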
The paper presents a practical example of improving quality and occupational safety on automated casting lines. Working conditions on a line of box moulding with a horizontal mould split were analysed due to the low degree of automation at the stage of installing cores or filters and dosing the spheroidizing mortar. A simulation analysis was carried out to assess the grounds for introducing an automatic mortar dispenser to the mould. To carry out the research, a simulation model of the line was created in the universal Arena software for modelling and simulation of manufacturing systems by Rockwell Software Inc. A simulation experiment was carried out on the model in order to determine the basic parameters of the working system. Organization and working conditions in other sections of the line were also analysed, paying particular attention to quality, ergonomics and occupational safety. An ergonomics analysis was carried out for the manual core installation and filter installation workplaces, and changes to these workplaces were suggested in order to eliminate actions that are unnecessary and onerous for employees.
The MDCT and IntMDCT algorithms are widely utilized in audio coding. The Integer MDCT (IntMDCT) is derived from the Modified Discrete Cosine Transform by a lifting scheme or rounding operations; it inherits the properties of the MDCT and offers exact invertibility and good spectral properties. In this paper we discuss audio codecs such as AAC and FLAC using the MDCT and Integer MDCT algorithms and determine which algorithm achieves a better compression ratio (CR). The goal of this work is to hybridize lossy and lossless audio codecs with a reduced bit rate but finer sound quality. The quality of the audio is determined by subjective and objective testing in terms of MOS (Mean Opinion Score), ABX, and hearing-test methodologies such as PEAQ (Perceptual Evaluation of Audio Quality) and ODG (Objective Difference Grade). The performance measures, namely the compression ratio (CR) and the sound pressure level (SPL), are estimated.
The use of quantitative methods, including stochastic and exploratory techniques, in environmental studies does not seem to be sufficient in practical terms. There is no comprehensive analytical system dedicated to this issue, nor research regarding this subject. The aim of this study is to present the Eco Data Miner system: its idea, construction and the possibility of implementing it into existing environmental information systems. The methodological emphasis was placed on the issue of one-dimensional data quality assessment using the proposed QAAH1 method, which combines a harmonic model and robust estimators with the classical tests for outlier values and their iterative expansions. The results demonstrate both the complementarity of the proposed solution with the classical methods and the fact that it significantly extends the range of applications. The practical usefulness is also highly significant due to the high effectiveness, numerical efficiency and simplicity of using this new tool.
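A robust one-dimensional outlier screen in the spirit of combining robust estimators with classical outlier tests; the data and the 3.5 cut-off are illustrative, and this sketch is not the QAAH1 method itself:

```python
import numpy as np

# One-dimensional outlier screening with robust estimators: the median
# and the MAD are insensitive to the gross error, so the robust z-score
# exposes it cleanly (data values are made up).
x = np.array([4.1, 4.3, 4.0, 4.2, 4.4, 9.8, 4.1, 4.2])   # one gross error

med = np.median(x)
mad = np.median(np.abs(x - med))
robust_z = 0.6745 * (x - med) / mad      # MAD rescaled to sigma-equivalents

outliers = np.abs(robust_z) > 3.5        # common robust cut-off
print(x[outliers])
```

A classical z-score on the same data would be dragged toward the outlier by the inflated mean and standard deviation, which is why robust estimators complement the classical tests here.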