Search results

Number of results: 14

Abstract

The accuracy of vehicle speed measured by a speedometer is analysed, with emphasis placed on the application of the skew normal distribution. The accuracy of the measured vehicle speed depends on many error sources: the construction of the speedometer, the measurement method, the inadequacy of the model to the real physical process, the transmission of the information signal, external conditions, production process technology, etc. The errors of the speedometer are analysed in a complex relation to the errors of speed control gauges, whose operation is based on the Doppler effect. Parameters of the normal distribution and the skew normal distribution were applied in the error analysis. It is shown that applying maximum permissible errors to control the measurement results of vehicle speed gives paradoxical results: in the case of the skew normal distribution, the standard deviations of higher vehicle speeds are smaller than the standard deviations of lower speeds, whereas in the case of the normal distribution a higher speed has a greater standard deviation. For speed measurements by Doppler speed gauges it is suggested to calculate the weighted average vehicle speed instead of the arithmetic average speed, which corresponds more closely to the real dynamic changes of the vehicle speed parameters.
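A minimal sketch of the two ideas mentioned above, under my own illustrative assumptions (the distribution parameters, speeds and dwell times below are invented for demonstration and are not taken from the article): drawing speed errors from a skew normal model with scipy.stats.skewnorm, and replacing the arithmetic mean of gauge readings with a time-weighted average speed.

    # Illustrative only: parameter values are assumptions, not the article's data.
    import numpy as np
    from scipy.stats import skewnorm

    # hypothetical skew normal error models at a low and a high true speed (km/h)
    low_speed  = skewnorm.rvs(a=4.0, loc=50.0,  scale=2.0, size=10_000)
    high_speed = skewnorm.rvs(a=4.0, loc=120.0, scale=1.2, size=10_000)
    print("sample std near 50 km/h :", low_speed.std(ddof=1))
    print("sample std near 120 km/h:", high_speed.std(ddof=1))   # can be smaller under skew normality

    # weighted average speed: weight each measured speed by the time spent at it
    speeds = np.array([48.0, 52.0, 95.0, 118.0])   # measured speeds, km/h
    times  = np.array([30.0, 40.0, 10.0, 5.0])     # assumed dwell times, s
    print("weighted average  :", np.average(speeds, weights=times))
    print("arithmetic average:", speeds.mean())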
Go to article

Abstract

Fractal analysis is one of the rapidly evolving branches of mathematics and finds application in various analyses, such as the description of pore space. It constitutes a new approach to the natural irregularity and roughness of pores and, to be applied properly, it should be accompanied by an error estimation. The article presents and verifies the uncertainties and imperfections connected with image analysis and discusses possible ways of correcting them. One of the key aspects of such research is finding both the appropriate location and the number of photos to take. A coarse-grained sandstone thin section was photographed and the pictures were then combined into one larger image. The distributions of the fractal parameters show their variation and suggest that a properly gathered set of photos should include both highly and less porous regions, in an amount representative of and adequate to the sample. The influence of resolution on the fractal dimension and lacunarity values was examined. For SEM limestone images obtained using backscattered electrons, magnifications in the range of 120x to 2000x were used; additionally, a single pore was examined. The acquired results indicate that the values of the fractal dimension are similar across a wide range of magnifications, while lacunarity changes each time, which is connected with the changing homogeneity of the image. The article also addresses the problem of determining the spatial distribution of fractal parameters based on binarization. The available methods assume that binarization is carried out before or after the image is divided into rectangles used to create the fractal dimension and lacunarity values for interpolation. An individual binarization, although time consuming, provides better results that resemble reality more closely. It is not possible to define a single correct methodology of error elimination; instead, a set of hints is presented that can improve the results of further image analysis of pore space.
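For readers unfamiliar with how a fractal dimension is extracted from a binarized pore image, the sketch below shows the standard box-counting estimate (a generic method assumed here, not the article's exact workflow; the demo image is random noise standing in for a binarized thin-section photograph).

    # Standard box-counting sketch: fractal dimension of a binary pore image.
    import numpy as np

    def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
        counts = []
        for s in box_sizes:
            h, w = binary_image.shape
            trimmed = binary_image[:h - h % s, :w - w % s]        # tile exactly into s x s boxes
            boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
            counts.append(boxes.any(axis=(1, 3)).sum())           # boxes containing any pore pixel
        # D is the slope of log(N) versus log(1/s)
        slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
        return slope

    # toy binary image: 1 = pore, 0 = grain (a real run would use a binarized SEM or thin-section image)
    demo = (np.random.default_rng(1).random((256, 256)) > 0.7).astype(np.uint8)
    print(box_counting_dimension(demo))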
Go to article

Abstract

Effectiveness of operation of a weapon stabilization system is largely dependent on the choice of a sensor, i.e. an accelerometer. The paper identifies and examines fundamental errors of piezoelectric accelerometers and offers measures for their reduction. Errors of a weapon stabilizer piezoelectric sensor have been calculated. The instrumental measurement error does not exceed 0.1 × 10⁻⁵ m/s². The errors caused by the method of attachment to the base, different noise sources and zero point drift can be mitigated by the design features of piezoelectric sensors used in weapon stabilizers.
Go to article

Abstract

This research presents a comprehensive assessment of the quality of precision castings made in the Replicast CS process. The evaluation was based on the quality of the surface layer, shape errors and the accuracy of the linear dimensions. The studies were carried out on modern equipment; among other things, a Zeiss Calypso measuring machine and a profilometer were used. The obtained results allowed a comparison of the lost-wax process and the Replicast CS process.
Go to article

Abstract

This article analyzes the technology of creating and updating a digital topographic map using the method of mapping (generalization) onto an updated map at a scale of 1:25,000 based on the source cartographic material. The main issue in the creation of digital maps is the study of map production accuracy and the analysis of errors arising in the map production process. When determining the quality of a digital map, the completeness and accuracy of object and terrain mapping are evaluated, and the correctness of object identification, the logical consistency of the structure, and the representation of objects are assessed. The main and most effective method that allows relief-induced displacement errors to be taken into account during image processing is orthotransformation, but the fragment used to update the digital topographic map needs additional verification of its compliance with the scale requirements of the map. An instrumental survey helps to clearly identify the areas of the space image closer to the nadir points and to reject poor-quality material. The software used for building the geodetic control network should provide stable accuracy results regardless of the scale of mapping, the physical and geographical conditions of the work area or the conditions of aerial photography.
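As a small worked illustration of why areas closer to the nadir are preferable, the sketch below uses the standard photogrammetric relation for radial relief displacement on a near-vertical image, d = r·h/H (a textbook formula assumed here, not one quoted from the article); orthotransformation is the step that removes this displacement.

    # Radial relief displacement on a near-vertical image: d = r * h / H.
    def relief_displacement(r_mm, height_m, flying_height_m):
        """r_mm: radial distance of the point from the nadir on the image (mm),
        height_m: terrain height above the reference plane (m),
        flying_height_m: flying height above the reference plane (m)."""
        return r_mm * height_m / flying_height_m

    # e.g. a point 80 mm from the nadir, 50 m above the datum, imaged from 4000 m
    print(relief_displacement(80.0, 50.0, 4000.0), "mm")   # 1.0 mm on the image; smaller r gives a smaller shift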
Go to article

Abstract

Time domain analysis is used to determine whether A/D converters that employ higher-order sigma-delta modulators, widely used in digital acoustic systems, have superior performance over classical synchronous A/D converters with first-order modulators with respect to an important metrological property, namely the magnitude of the quantization error. It is shown that the quantization errors of sigma-delta A/D converters with higher-order modulators are at exactly the same level as for converters with a first-order modulator.
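A minimal time-domain sketch of the kind of quantity being compared, under my own simplifying assumptions (DC input, 1-bit quantizer, plain averaging as the decimation filter; this is not the paper's derivation): a first-order sigma-delta modulator whose quantization error is taken as the difference between the input and the mean of the 1-bit output stream over one decimation window.

    # First-order sigma-delta modulator with a DC input (illustrative assumptions only).
    import numpy as np

    def first_order_sdm(x, n_samples=256):
        integrator, feedback, out = 0.0, 0.0, []
        for _ in range(n_samples):
            integrator += x - feedback                       # accumulate input minus fed-back output
            feedback = 1.0 if integrator >= 0 else -1.0      # 1-bit quantizer
            out.append(feedback)
        return np.array(out)

    for x in (0.1, 0.35, 0.8):                               # DC inputs within the +/-1 full scale
        bits = first_order_sdm(x)
        print(x, "quantization error:", bits.mean() - x)     # error of the averaged 1-bit stream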
Go to article

Abstract

DNA sequencing remains one of the most important problems in molecular and computational biology. One of the methods used for this purpose is sequencing by hybridization. In this approach, DNA chips composed of a full library of oligonucleotides of a given length are usually used, but in principle it is possible to use other types of chips. Isothermic DNA chips, being one of them, may reduce the hybridization error rate when used for sequencing. However, it was not clear whether the number of errors resulting from subsequence repetitions is also reduced in this case. In this paper a method for estimating the resolving power of isothermic DNA chips is described, which allows such chips to be compared with classical ones. The analysis of the resolving power shows that the probability of sequencing errors caused by subsequence repetitions is greater in the case of isothermic chips than for their classical counterparts of a similar cardinality. This result suggests that isothermic chips should be chosen carefully, since in some cases they may not give better results than the classical ones.
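To make the notion of an isothermic library concrete, here is a hedged sketch that enumerates all oligonucleotides of a fixed "temperature" under a common Wallace-type weighting (A/T contributing 2 and C/G contributing 4 per base); this weighting rule and the chosen temperature are assumptions for illustration and may differ from the model used in the paper. The sketch only compares library cardinalities with a classical fixed-length chip, which is the kind of comparison the resolving-power analysis is built on.

    # Enumerate an isothermic oligonucleotide library (assumed A/T = 2, C/G = 4 weighting).
    WEIGHT = {"A": 2, "T": 2, "C": 4, "G": 4}

    def isothermic_library(temperature):
        """All oligonucleotides whose weighted base sum equals the given temperature."""
        library = []
        def extend(prefix, remaining):
            if remaining == 0:
                library.append(prefix)
                return
            for base, w in WEIGHT.items():
                if w <= remaining:
                    extend(prefix + base, remaining - w)
        extend("", temperature)
        return library

    classical_8mers = 4 ** 8                 # full classical library of length-8 oligos
    iso = isothermic_library(24)             # probes of lengths 6..12 under the assumed rule
    print(len(iso), "isothermic probes vs", classical_8mers, "classical 8-mers")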
Go to article

Abstract

Single-phase voltage loss is a common fault. Once a voltage-loss failure occurs, the affected electrical energy is not measured and has to be calculated so as to protect the interests of the power supplier. Two automatic calculation methods, power substitution and voltage substitution, are introduced in this paper. Since a quantitative analysis of the calculation error of the voltage substitution method has been lacking, the grid traversal method and the MATLAB tool are applied to this problem. The theoretical analysis indicates that the calculation error is closely related to the voltage unbalance factor and the power factor, and that the maximum calculation error is about 6% when the power system operates normally. To verify the theoretical analysis, two three-phase electrical energy metering devices were developed, and verification tests were carried out in both laboratory and field conditions. The laboratory test results are consistent with the theoretical ones, and the field test results show that the calculation errors are generally below 0.2%, which means the calculated values are correct in most cases.
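The sketch below only illustrates the grid-traversal pattern over the two parameters named in the abstract (voltage unbalance factor and power factor); the function substitution_error is a hypothetical placeholder, since the article's actual error expression is not given here, and the parameter ranges are assumptions.

    # Grid traversal over unbalance factor and power factor (error model is a placeholder).
    import numpy as np

    def substitution_error(unbalance_factor, power_factor):
        # hypothetical stand-in for the voltage-substitution calculation error model
        return unbalance_factor * np.sqrt(1.0 - power_factor ** 2)

    unbalance = np.linspace(0.0, 0.04, 81)        # voltage unbalance factor, 0..4 % (assumed range)
    pf = np.linspace(0.5, 1.0, 101)               # power factor range to traverse (assumed range)
    U, P = np.meshgrid(unbalance, pf)
    errors = substitution_error(U, P)
    i, j = np.unravel_index(np.argmax(np.abs(errors)), errors.shape)
    print("worst-case error %.3f %% at unbalance %.3f, pf %.2f" % (100 * errors[i, j], U[i, j], P[i, j]))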
Go to article

Abstract

This paper researches the application of grey system theory to cost forecasting in coal mining. The grey model GM(1,1) is widely used for forecasting in business and industrial systems, with the advantages of requiring minimal data, a short time and little fluctuation; it also fits exponentially increasing data more precisely than other prediction techniques. However, the traditional GM(1,1) model suffers from poor anti-interference ability. To address the flaws of the conventional GM(1,1) model, this paper proposes a novel dynamic forecasting model with background value optimization and Fourier-series residual error correction built on the traditional GM(1,1) model. The new model applies the golden segmentation optimization method to optimize the background value and Fourier-series theory to extract periodic information in the grey forecasting model in order to correct the residual error. In the proposed dynamic model, the newest data is gradually added while the oldest is removed from the original data sequence. To test the new model's forecasting performance, it was applied to the prediction of unit costs in coal mining, and the results show that the prediction accuracy is improved compared with other grey forecasting models: the new model gives MAPE and C values of 0.14% and 0.02, respectively, compared to 1.75% and 0.37 for the traditional GM(1,1) model. Thus, the new GM(1,1) model proposed in this paper, with the advantages of practical applicability and high accuracy, provides a new method for cost forecasting in coal mining and helps decision makers to make more scientific decisions about the mining operation.
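For context, the sketch below implements only the plain textbook GM(1,1) baseline (accumulated generating operation, conventional background values and least-squares fit of the whitened equation); the paper's background-value optimization and Fourier-series residual correction are not reproduced, and the toy cost series is invented for demonstration.

    # Baseline GM(1,1) forecast (textbook form only).
    import numpy as np

    def gm11_forecast(x0, steps=1):
        x0 = np.asarray(x0, dtype=float)
        n = len(x0)
        x1 = np.cumsum(x0)                                   # accumulated generating operation (AGO)
        z1 = 0.5 * (x1[1:] + x1[:-1])                        # conventional background values
        B = np.column_stack((-z1, np.ones(n - 1)))
        Y = x0[1:]
        a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # develop coefficient a, grey input b
        k = np.arange(n + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time response of the whitened equation
        x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))  # inverse AGO
        return x0_hat[n:]                                    # forecasts beyond the observed data

    # toy unit-cost series (arbitrary numbers, for demonstration only)
    print(gm11_forecast([27.3, 28.1, 29.4, 30.2, 31.7], steps=2))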
Go to article

Abstract

From the theory of reliability it follows that the greater the observational redundancy in a network, the higher is its level of internal reliability. However, taking into account the physical nature of the measurement process, one may notice that the planned additional observations may increase the number of potential gross errors in a network without raising the internal reliability to the theoretically expected degree. Hence, it is necessary to set realistic limits on a sufficient number of observations in a network. An attempt to provide principles for finding such limits is undertaken in the present paper. An empirically obtained formula (Adamczewski 2003), called there the law of gross errors, which determines the chances that a certain number of gross errors may occur in a network, was taken as a starting point in the analysis. With the aid of an auxiliary formula derived on the basis of the Gaussian law, the Adamczewski formula was modified to become an explicit function of the number of observations in a network. This made it possible to construct the tools necessary for the analysis and, finally, to formulate guidelines for determining the upper bounds for internal reliability indices. Since the Adamczewski formula was obtained for classical networks, the guidelines should be considered an introductory proposal requiring verification with reference to modern measuring techniques.
Go to article

Abstract

This paper presents decision-making risk estimation based on planimetric large-scale map data, i.e. data sets or databases useful for creating planimetric maps at scales of 1:5,000 or larger. The studies were conducted on four such data sets. Errors of the map data were used for the risk assessment of decisions about the localization of objects, e.g. for land-use planning in the realization of investments. An analysis was performed for a large statistical sample of shift vectors of control points, which were identified with the position errors of these points (errors of map data). In this paper, empirical cumulative distribution function models for decision-making risk assessment were established. The established models of the empirical cumulative distribution functions of the shift vectors of control points involve polynomial equations. The degree of compatibility of the polynomials with the empirical data was evaluated by the convergence coefficient and by the indicator of the mean relative compatibility of the model. The application of an empirical cumulative distribution function allows the probability of the occurrence of position errors of points in a database to be estimated. The estimated decision-making risk is represented by the probability of the errors of the points stored in the database.
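A short sketch of the general workflow described above, under my own assumptions (the shift-vector lengths, polynomial degree and threshold below are illustrative, not the article's data or model): build the empirical cumulative distribution function of control-point shift-vector lengths, approximate it with a polynomial, and read off the probability that a position error stays below a chosen threshold.

    # ECDF of shift-vector lengths with a polynomial approximation (illustrative data).
    import numpy as np

    shifts = np.array([0.05, 0.08, 0.11, 0.12, 0.15, 0.18, 0.21, 0.25, 0.31, 0.40])  # metres

    x = np.sort(shifts)
    ecdf = np.arange(1, len(x) + 1) / len(x)         # empirical CDF values at the sorted shifts

    coeffs = np.polyfit(x, ecdf, deg=3)              # polynomial model of the ECDF
    model = np.poly1d(coeffs)

    threshold = 0.20                                 # probability that the position error is below 0.20 m
    print("P(error < %.2f m) ~ %.2f" % (threshold, float(np.clip(model(threshold), 0.0, 1.0))))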
Go to article

Abstract

Covert sonar operation can be achieved by using continuous frequency-modulated sounding signals with reduced power and a significantly prolonged repetition time. The application of matched filtering in the sonar receiver provides optimal conditions for detection against a background of white noise and reverberation, and very good resolution of distance measurements for motionless targets. The article shows that target movement causes large range measurement errors when linear and hyperbolic frequency modulations are used, and formulas for the calculation of these errors are given. It is shown that for signals with linear frequency modulation the range resolution and detection conditions deteriorate, whereas the use of hyperbolic frequency modulation largely eliminates these adverse effects.
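As a back-of-the-envelope illustration of the size of such errors, the sketch below uses the standard range-Doppler coupling of LFM matched filtering (a textbook relation assumed here, not the article's own formulas): a Doppler shift f_d displaces the correlation peak by roughly f_d·T/B, and the example parameters are invented.

    # Apparent range error of an LFM sounding signal caused by target motion.
    def lfm_range_error(v_target, f0, bandwidth, pulse_length, c=1500.0):
        f_d = 2.0 * v_target * f0 / c            # two-way Doppler shift for a monostatic sonar
        dt = f_d * pulse_length / bandwidth      # time displacement of the matched-filter peak
        return 0.5 * c * dt                      # apparent range error in metres

    # e.g. 5 m/s target, 30 kHz carrier, 1 kHz sweep, 1 s long sounding signal
    print(lfm_range_error(5.0, 30e3, 1e3, 1.0), "m")   # roughly 150 m of range error

A long, narrowband sweep, exactly the kind of low-power covert signal discussed above, therefore couples even modest target speeds into large range offsets, which is why hyperbolic (Doppler-invariant) modulation is attractive.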
Go to article

Abstract

Freeform surfaces have wide engineering applications. Designers use B-splines, Non-Uniform Rational B-splines, etc., to represent freeform surfaces in CAD, while manufacturers employ machines with controllers based on approximating functions or splines. Various errors also creep in during machining operations. Therefore, manufactured freeform surfaces have to be verified for conformance to the design specification. Different points on the surface are probed using a coordinate measuring machine, and the substitute geometry of the surface established from the measured points is compared with the design surface. The sampling points can be distributed according to different strategies. In the present work, two new strategies for distributing the points, on the basis of uniform surface area and of dominant points, are proposed, considering the geometrical nature of the surfaces. Metrological aspects such as probe contact and the margins to be provided along the sides have also been included. The results are discussed in terms of the deviation between the measured points and the substitute surface as well as between the design and substitute surfaces, and are compared with those obtained with methods reported in the literature.
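A rough illustration of the uniform-surface-area idea, as a minimal sketch under my own assumptions (the demonstration patch and the area-weighted random selection are mine, not the authors' algorithm): parameter locations are chosen with probability proportional to the local area element |S_u x S_v|, so that densely curved regions receive proportionally more probing points per unit of parameter space.

    # Area-weighted placement of sampling points on a parametric surface (illustrative).
    import numpy as np

    def surface(u, v):
        # hypothetical freeform patch used only for demonstration
        return np.stack([u, v, 0.3 * np.sin(2 * np.pi * u) * np.cos(np.pi * v)], axis=-1)

    def area_weighted_samples(n_points, grid=200):
        u = np.linspace(0, 1, grid)
        v = np.linspace(0, 1, grid)
        U, V = np.meshgrid(u, v, indexing="ij")
        P = surface(U, V)
        Su = np.gradient(P, u, axis=0)                       # numerical partial derivative dS/du
        Sv = np.gradient(P, v, axis=1)                       # numerical partial derivative dS/dv
        area = np.linalg.norm(np.cross(Su, Sv), axis=-1)     # local area element per grid node
        prob = (area / area.sum()).ravel()
        idx = np.random.default_rng(0).choice(prob.size, size=n_points, replace=False, p=prob)
        return np.column_stack((U.ravel()[idx], V.ravel()[idx]))

    print(area_weighted_samples(10))                         # (u, v) locations of 10 probing points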
Go to article
