Search results


Number of results: 9

Abstract

According to the European Environment Agency (EEA 2018), air quality in Poland is among the worst in Europe. There are several sources of air pollution, but the condition of the air in Poland is primarily the result of so-called low-stack emissions from the household sector. The main causes of these emissions are the combustion of low-quality fuels (mainly low-grade coal) and waste, and the use of obsolete, low-efficiency heating boilers without appropriate filters. The aim of the study was to evaluate the impact of measures aimed at reducing low-stack emissions from the household sector (boiler replacement, change of fuel type, and thermal insulation of buildings), resulting from environmental regulations, on energy efficiency and pollutant emissions from the household sector in Poland. Stochastic energy and mass balance models of a hypothetical household were developed and used to assess the impact of the remedial actions on energy efficiency and pollutant emissions. The annual energy consumption and pollutant emissions were estimated for hypothetical households before and after the implementation of a given remedial action. The calculations, using Monte Carlo simulation, were carried out for several thousand hypothetical households, whose technical parameters (type of residential building, residential floor area, unit energy demand for heating, type of heat source) were drawn from probability distributions developed from an analysis of the domestic structure of households. The model takes into account the correlation coefficients between its explanatory variables. The obtained results were scaled up to 14.1 million hypothetical households, i.e. the actual number of households in Poland. The results made it possible to identify the potential for reducing emissions of pollutants such as carbon dioxide, carbon monoxide, dust, and nitrogen oxides, and for improving energy efficiency, as a result of the proposed and implemented low-stack emission reduction measures. The emission reduction potential is 94% for CO, 49% for NOx, 90% for dust, and 87% for SO2. The potential for improving energy efficiency in households is around 42%.
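
The sampling scheme described above can be sketched in a few lines of Python. The distributions, correlation coefficient, emission factors, and boiler efficiencies below are illustrative placeholders, not the values from the study, which were derived from Polish household statistics:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                      # hypothetical households in the sample
N_POLAND = 14.1e6               # real number of households in Poland

# Correlation between floor area and unit energy demand (assumed value).
corr = np.array([[1.0, -0.3],
                 [-0.3, 1.0]])
L = np.linalg.cholesky(corr)
z = rng.standard_normal((N, 2)) @ L.T   # correlated standard normals

# Transform correlated normals into the marginal distributions
# (lognormal parameters here are purely illustrative).
area_m2 = np.exp(4.5 + 0.35 * z[:, 0])          # residential floor area, m2
demand_kwh_m2 = np.exp(5.0 + 0.30 * z[:, 1])    # unit heat demand, kWh/(m2*a)

heat_kwh = area_m2 * demand_kwh_m2              # annual energy for heating

# Illustrative CO2 emission factor and boiler efficiencies (assumed).
EF_COAL = 0.35                  # kg CO2 per kWh of fuel energy
eta_old, eta_new = 0.60, 0.90   # boiler efficiency before/after replacement

co2_before = heat_kwh / eta_old * EF_COAL
co2_after = heat_kwh * 0.8 / eta_new * EF_COAL  # 0.8: insulation effect (assumed)

scale = N_POLAND / N            # scale the sample up to all of Poland
print(f"CO2 before: {co2_before.sum() * scale / 1e9:.1f} Mt/a")
print(f"CO2 after:  {co2_after.sum() * scale / 1e9:.1f} Mt/a")
print(f"Reduction potential: {1 - co2_after.sum() / co2_before.sum():.0%}")
```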

Abstract

The object of the present study is to investigate the influence of damping uncertainty and statistical correlation on the dynamic response of structures with random damping parameters in the neighbourhood of a resonant frequency. A Non-Linear Statistical Model (NLSM) is successfully demonstrated to predict the probabilistic response of an industrial building structure with correlated random damping. A practical computational technique to generate first- and second-order sensitivity derivatives is presented, and the validity of the predicted statistical moments is checked by traditional Monte Carlo simulation. Simulation results show the effectiveness of the NLSM in estimating uncertainty propagation in structural dynamics. In addition, it is demonstrated that the uncertainty in damping indeed influences the system response, with the effects being more pronounced for lightly damped structures, higher variability, and stronger statistical correlation of the damping parameters.
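
The essence of the perturbation-versus-simulation comparison can be illustrated on a single-degree-of-freedom system rather than the industrial building model of the paper. The following Python sketch propagates damping uncertainty through the resonant response amplitude with a second-order Taylor (perturbation) approximation and checks it against traditional Monte Carlo simulation; all parameter values are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

# SDOF system driven at resonance: amplitude ~ 1 / (2 * zeta)
# (unit static deflection assumed).
def amplitude(zeta):
    return 1.0 / (2.0 * zeta)

zeta_mean, zeta_cov = 0.02, 0.20     # lightly damped, 20% variability (assumed)
zeta_std = zeta_cov * zeta_mean

# Second-order perturbation estimate of the response moments:
# E[g(z)] ~ g(m) + 0.5 * g''(m) * var,   Var[g(z)] ~ (g'(m))^2 * var
g = amplitude(zeta_mean)
g1 = -1.0 / (2.0 * zeta_mean**2)     # first sensitivity derivative
g2 = 1.0 / zeta_mean**3              # second sensitivity derivative
mean_pert = g + 0.5 * g2 * zeta_std**2
var_pert = g1**2 * zeta_std**2

# Reference: traditional Monte Carlo simulation.
zeta = rng.normal(zeta_mean, zeta_std, 200_000)
zeta = np.clip(zeta, 1e-4, None)     # keep damping physical
amp = amplitude(zeta)
print(f"mean: perturbation {mean_pert:.2f} vs MC {amp.mean():.2f}")
print(f"std:  perturbation {np.sqrt(var_pert):.2f} vs MC {amp.std():.2f}")
```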

Abstract

The sustainable management of energy production and consumption is one of the main challenges of the 21st century. This results from the threats to the natural environment, including the negative impact of the energy sector on the climate, the limited resources of fossil fuels, and the instability of renewable energy sources, despite the development of technologies for obtaining energy from the sun, wind, water, etc. In this situation, the efficiency of energy management, both on the micro (distributed energy) and macro (power system) scale, may be improved by innovative technological solutions enabling energy storage. Their effective implementation enables energy to be stored during periods of overproduction and used in the case of energy shortages. The importance of these challenges cannot be overestimated. Modern science needs to solve various technological issues in the field of storage, organizational problems of enterprises producing electricity and heat, and issues related to the functioning of energy markets. The article presents the specific character of the operation of a combined heat and power (CHP) plant with a heat accumulator in the electricity market, taking into account the parameters affected by uncertainty. It is pointed out that the analysis of the risk associated with energy prices and weather conditions is an important element of the decision-making process and of the management of a heat and power plant equipped with a cold water heat accumulator. The complexity of the issues and the number of variables to be analyzed at a given time justify the use of advanced forecasting methods. Stochastic modeling methods are interesting tools that allow the operation of an installation with a heat accumulator to be forecast while taking the influence of numerous variables into account. The analysis has shown that the combined use of Monte Carlo simulations and forecasting based on geometric Brownian motion enables the quantification of the risk of the CHP plant's operation and of the impact of using the energy storage on resolving uncertainties. The applied methodology can be used at the design stage of systems with energy storage and enables risk analysis in already existing systems, allowing their efficiency to be improved. Introducing additional parameters of the planned investments into the analysis will allow the maximum use of energy storage systems in both industrial and distributed power generation.
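
A minimal Python sketch of the price-risk part of this methodology is given below. It simulates electricity price paths with geometric Brownian motion and quantifies the revenue risk of a simple, hypothetical dispatch rule for a CHP plant with a heat accumulator; the drift, volatility, volumes, and price threshold are assumed values, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Geometric Brownian motion for the electricity price (parameters assumed).
s0, mu, sigma = 50.0, 0.02, 0.25      # EUR/MWh, annual drift and volatility
days, n_paths = 365, 10_000
dt = 1.0 / days

z = rng.standard_normal((n_paths, days))
log_paths = np.log(s0) + np.cumsum((mu - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
price = np.exp(log_paths)             # daily prices per path, EUR/MWh

# Hypothetical dispatch rule: the heat accumulator lets the plant sell
# full electricity output on high-price days and store heat otherwise.
volume_mwh = 100.0                    # daily electricity output (assumed)
threshold = 45.0                      # price threshold, EUR/MWh (assumed)
sold = np.where(price > threshold, volume_mwh, 0.5 * volume_mwh)
revenue = (price * sold).sum(axis=1)  # annual revenue per path

# Risk quantification from the simulated revenue distribution.
print(f"expected annual revenue: {revenue.mean() / 1e6:.2f} MEUR")
print(f"5% quantile (downside):  {np.quantile(revenue, 0.05) / 1e6:.2f} MEUR")
```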

Abstract

Basic gesture sensors can play a significant role as input units in mobile smart devices. However, they have to handle a wide variety of gestures while preserving the advantages of basic sensors. In this paper, a user-determined approach to the design of a sparse optical gesture sensor is proposed. The statistical research on a study group of individuals includes the measurement of user-related parameters such as the speed of a performed swipe (a dynamic gesture) and the morphology of the fingers. The obtained results, as well as other a priori requirements for an optical gesture sensor, were then used in the design process. Several properties were examined using simulations or experimental verification. It was shown that the designed optical gesture sensor provides accurate localization of fingers and recognizes a set of static and dynamic hand gestures with relatively low power consumption.
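
The user-determined design step can be illustrated with a short Python sketch: given measured swipe speeds from a study group, a design percentile determines the minimum frame rate of the sensor. The data and geometry below are simulated stand-ins for the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical study-group data: swipe speeds in m/s (the real study
# measured these on a group of individuals; values here are simulated).
swipe_speed = rng.lognormal(mean=-0.5, sigma=0.4, size=200)

# User-determined design requirement: the sensor must resolve the
# fastest swipes, e.g. the 95th percentile of the measured speeds.
v_design = np.quantile(swipe_speed, 0.95)

# With a sensing zone of width w, a swipe crosses it in w / v seconds;
# the frame rate must give several samples per crossing (values assumed).
w = 0.06            # sensing zone width, m
samples_needed = 8  # samples per gesture for reliable recognition
f_min = samples_needed * v_design / w
print(f"design swipe speed: {v_design:.2f} m/s")
print(f"minimum frame rate: {f_min:.0f} Hz")
```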

Abstract

The purpose of this study is to identify relationships between the values of fluidity obtained by computer simulation and by an experimental test in a horizontal three-channel mould designed in accordance with Measurement Systems Analysis. An Al-Si alloy was used as the model material. The factors affecting the fluidity were varied in the following ranges: Si content 5-12 wt.%, Fe content 0.15-0.3 wt.%, pouring temperature 605-830°C, and pouring speed 100-400 g·s⁻¹. The NovaFlow&Solid software was used for the simulations. No statistically significant difference was found between the fluidity calculated by the regression equation and that obtained by experiment. This design simplifies the assessment of the capability of the fluidity measurement process, allowing experiments to be fully replaced by calculation using the regression equation.
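
The comparison between simulated and experimental fluidity can be sketched as follows in Python, using hypothetical paired data in place of the NovaFlow&Solid and mould measurements; the regression fit and paired t-test mirror the significance check described above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical paired data: fluidity (mm) for the same melts obtained by
# simulation and by the three-channel mould experiment (placeholder values).
sim = rng.uniform(300.0, 700.0, 30)                       # simulated, mm
meas = 5.0 + 0.99 * sim + rng.normal(0.0, 15.0, sim.size) # experimental, mm

# Regression equation linking the experimental value to the simulated one.
b1, b0 = np.polyfit(sim, meas, 1)
print(f"regression: meas = {b0:.1f} + {b1:.3f} * sim")

# Paired t-test for a statistically significant difference between the
# calculated and experimentally measured fluidity.
t, p = stats.ttest_rel(sim, meas)
print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")
print("no significant difference" if p > 0.05 else "significant difference")
```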

Abstract

The paper presents a multi-scale mathematical model dedicated to the comprehensive simulation of resistance heating combined with the melting and controlled cooling of steel samples. Experiments to verify the formulated numerical model were performed using a Gleeble 3800 thermo-mechanical simulator. The model for the macro scale is based upon the solution of the Fourier-Kirchhoff equation to predict the distribution of temperature fields within the volume of the sample. The macro-scale solution is complemented by a functional model generating volumetric heat sources resulting from the electric current flowing through the sample. The model for the micro scale, concerning the grain growth simulation, is based upon a probabilistic Monte Carlo algorithm and on the minimization of the system energy. The model takes into account the forming mushy zone, where grains degrade at the melting stage; this is a unique feature of the micro-solution. The solution domains are coupled by interpolating the node temperatures of the finite element mesh (the macro model) onto the Monte Carlo cells (the micro model). The paper is complemented with examples of resistance heating results and macro- and microstructural tests, along with test computations estimating the extent of zones with different grain growth dynamics.
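
The micro-scale part of such a model is commonly realized as a Potts-type Monte Carlo algorithm. The sketch below is a minimal, generic version of that idea, not the paper's implementation: cells carry grain orientations, and reorientation attempts that lower the boundary energy are always accepted, with thermally activated acceptance otherwise (in the full model, the temperature of each cell would be interpolated from the finite element mesh):

```python
import numpy as np

rng = np.random.default_rng(11)

# Minimal 2D Monte Carlo (Potts-type) grain growth sketch (parameters assumed).
N, Q, kT, steps = 64, 16, 0.5, 200_000
grid = rng.integers(0, Q, size=(N, N))   # grain orientation per cell

def local_energy(g, i, j, state):
    # Boundary energy: one unit per unlike nearest neighbour (periodic).
    nbrs = [g[(i - 1) % N, j], g[(i + 1) % N, j],
            g[i, (j - 1) % N], g[i, (j + 1) % N]]
    return sum(state != n for n in nbrs)

for _ in range(steps):
    i, j = rng.integers(0, N, 2)         # pick a random cell
    old = grid[i, j]
    new = rng.integers(0, Q)             # attempt a new orientation
    dE = local_energy(grid, i, j, new) - local_energy(grid, i, j, old)
    # Accept moves that minimize the system energy; otherwise accept
    # with Boltzmann probability.
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        grid[i, j] = new

print("distinct grain orientations left:", len(np.unique(grid)))
```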

Abstract

In this work, a fast 32-bit one-million-channel time interval spectrometer based on field programmable gate arrays (FPGAs) is proposed. The time resolution is adjustable down to 3.33 ns (= T, the digitization/discretization period) on the prototype system hardware. The system is capable of collecting billions of time-interval events arranged in one million timing channels. This huge number of channels makes it an ideal measuring tool for very short to very long time intervals in nuclear particle detection systems. The data are stored and updated in a built-in SRAM memory during the measurement process and then transferred to the computer. Two time-to-digital converters (TDCs) working in parallel are implemented in the design to make the system immune to the loss of the first short time-interval events (namely those below 10 ns, according to the tests performed on the prototype hardware platform of the system). Additionally, the theory of the multiple-count-loss effect is investigated analytically. Using the Monte Carlo method, count losses at rates of up to 100 million events per second (Meps) are calculated, and the effective system dead time is estimated by fitting a non-extendable dead-time model to the results (τNE = 2.26 ns). An important effect of dead time on a measured random process is the distortion of the time spectrum; this effect is also studied using the Monte Carlo method. The uncertainty of the system is analysed experimentally. The standard deviation of the system is estimated as ±36.6 × T (T = 3.33 ns) for a one-second time-interval test signal (300 million T in the time interval).
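
The non-extendable dead-time model used for the curve fitting can be checked with a small Monte Carlo experiment, as in the sketch below; it simulates Poisson arrivals at 100 Meps and compares the simulated measured rate with the model prediction m = n / (1 + nτ), using the paper's fitted τNE = 2.26 ns:

```python
import numpy as np

rng = np.random.default_rng(2024)

# Non-extendable (non-paralyzable) dead time: an event is lost if it
# arrives within tau of the last *registered* event.
tau = 2.26e-9                     # effective dead time, s (from the paper)
n = 100e6                         # true event rate, 100 Meps
t_total = 0.01                    # simulated measurement time, s

# Poisson process: exponential inter-arrival times, cumulative sum.
arrivals = np.cumsum(rng.exponential(1.0 / n, int(n * t_total * 1.2)))
arrivals = arrivals[arrivals < t_total]

registered = 0
last = -np.inf
for t in arrivals:
    if t - last >= tau:           # event accepted, dead time restarts
        registered += 1
        last = t

m_sim = registered / t_total
m_model = n / (1 + n * tau)       # non-extendable dead-time model
print(f"simulated measured rate: {m_sim / 1e6:.1f} Meps")
print(f"model prediction:        {m_model / 1e6:.1f} Meps")
```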

Abstract

The paper considers the modeling and estimation of a stochastic frontier model in which the error components are assumed to be correlated and the inefficiency error is assumed to be autocorrelated. The multivariate Farlie-Gumbel-Morgenstern (FGM) and normal copulas are used to capture both the contemporaneous and the temporal dependence between, and among, the noise and the inefficiency components. The intractable multiple integrals that appear in the likelihood function of the model are evaluated using Halton-sequence-based Monte Carlo (MC) simulation. The consistency and the asymptotic efficiency of the resulting simulated maximum likelihood (SML) estimators of the model parameters are established. Finally, the application of the model, estimated by the SML method, to real-life US airline data shows significant noise-inefficiency dependence and temporal dependence of inefficiency.
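
The Halton-based simulation machinery can be illustrated with a short Python sketch that draws quasi-random points and pushes them through the bivariate FGM copula by conditional inversion; this shows the building blocks, not the paper's full simulated likelihood. The dependence parameter is an assumed value:

```python
import numpy as np

def halton(n, base):
    """First n points of the van der Corput sequence in the given base."""
    seq = np.zeros(n)
    for i in range(1, n + 1):
        f, k, x = 1.0, i, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i - 1] = x
    return seq

theta = 0.8                      # FGM dependence parameter (assumed)
n = 50_000
u = halton(n, 2)                 # quasi-random draws, bases 2 and 3
w = halton(n, 3)

# Conditional inversion for the FGM copula C(u,v) = u v (1 + theta(1-u)(1-v)).
b = 1.0 + theta * (1.0 - 2.0 * u)
v = 2.0 * w / (b + np.sqrt(b**2 - 4.0 * (b - 1.0) * w))

# Spearman's rho of the FGM copula is theta / 3; check the quasi-MC estimate.
ru = u.argsort().argsort()       # ranks of u
rv = v.argsort().argsort()       # ranks of v
rho = np.corrcoef(ru, rv)[0, 1]
print(f"QMC Spearman rho: {rho:.3f}  (theory: {theta / 3:.3f})")
```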

Abstract

The paper deals with the application of the Gumbel model to the evaluation of environmental loads. According to the recommendations of the Eurocodes, the conventional method of determining the return period and characteristic values of loads utilizes the theory of extremes and implicitly assumes that the cumulative distribution function of the annual (or other basic period) extremes is the Gumbel distribution. However, extreme value theory shows that the distribution of extremes asymptotically approaches the Gumbel distribution only as the number of independent observations in each observation period, from which the maximum is extracted, increases to infinity. Results of simulation-based calculations show that in practice the rate of convergence is very slow and depends significantly on the type of the parent distribution, the value of its coefficient of variation, and the number of observation periods. In this connection, a straightforward, purely empirical method based on fitting a curve to the observed extremes is suggested.
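
The suggested empirical approach can be sketched in Python: simulate block maxima from a lognormal parent, fit a Gumbel distribution by the method of moments, derive a characteristic value for a chosen return period, and compare the fitted and empirical distribution functions. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulated block maxima: n_obs observations per period, n_periods periods
# (the parent is lognormal here; the convergence rate to the Gumbel limit
# depends strongly on the parent distribution and its coefficient of variation).
n_obs, n_periods = 365, 50
x = rng.lognormal(0.0, 0.5, (n_periods, n_obs))
maxima = x.max(axis=1)

# Gumbel fit by the method of moments:
# std = pi * beta / sqrt(6),   mean = mu + gamma * beta
gamma = 0.5772156649             # Euler-Mascheroni constant
beta = maxima.std(ddof=1) * np.sqrt(6.0) / np.pi
mu = maxima.mean() - gamma * beta

# Characteristic value with a 50-year return period from the fitted Gumbel:
# F(x_T) = 1 - 1/T  =>  x_T = mu - beta * ln(-ln(1 - 1/T))
T_ret = 50.0
x_T = mu - beta * np.log(-np.log(1.0 - 1.0 / T_ret))
print(f"fitted Gumbel: mu = {mu:.2f}, beta = {beta:.2f}")
print(f"50-year characteristic value: {x_T:.2f}")

# Empirical check: compare the fitted CDF with the empirical CDF of maxima.
srt = np.sort(maxima)
emp = (np.arange(1, n_periods + 1) - 0.5) / n_periods
fit = np.exp(-np.exp(-(srt - mu) / beta))
print(f"max |F_emp - F_Gumbel| = {np.abs(emp - fit).max():.3f}")
```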
