Search results


Number of results: 9

Abstract

Fractal analysis is one of the rapidly evolving branches of mathematics and finds application in different analyses, such as the description of pore space. It constitutes a new approach to the natural irregularity and roughness of pores. To be properly applied, it should be accompanied by an error estimation. The article presents and verifies the uncertainties and imperfections connected with image analysis and expands on possible ways of correcting them. One of the key aspects of such research is finding both the appropriate locations and the number of photographs to take. A coarse-grained sandstone thin section was photographed and the pictures were then combined into one larger image. The distributions of the fractal parameters show how they vary and suggest that a properly gathered set of photographs should include both highly porous and less porous regions, and that their number should be representative of and adequate for the sample. The influence of resolution on the fractal dimension and lacunarity values was examined. For SEM limestone images obtained using backscattered electrons, magnifications in the range of 120x to 2000x were used; additionally, a single pore was examined. The results indicate that the fractal dimension values remain similar over a wide range of magnifications, while lacunarity changes each time, which is connected with the changing homogeneity of the image. The article also addresses the problem of determining the spatial distribution of fractal parameters based on binarization. The available methods assume that binarization is carried out either before or after dividing the image into rectangles used to obtain the fractal dimension and lacunarity values for interpolation. Binarizing each rectangle individually, although time-consuming, provides better results that resemble reality more closely. It is not possible to define a single correct methodology of error elimination; instead, a set of hints is presented that can improve the results of further image analysis of pore space.
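A minimal sketch of the two quantities named in the abstract, the box-counting fractal dimension and the gliding-box lacunarity, computed for a binary pore image. The thresholding, box sizes and the random test image below are illustrative assumptions; the article's binarization workflow and interpolation of spatial distributions are not reproduced.

```python
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a 2-D binary image by box counting."""
    counts = []
    for s in sizes:
        # Trim the image so it divides evenly into s x s boxes.
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        boxes = binary[:h, :w].reshape(h // s, s, w // s, s)
        # Count boxes that contain at least one pore (foreground) pixel.
        counts.append((boxes.max(axis=(1, 3)) > 0).sum())
    # Slope of log(count) vs log(1/size) gives the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def lacunarity(binary, box=8):
    """Gliding-box lacunarity: variability of pore mass among box positions."""
    h, w = binary.shape
    masses = np.asarray([binary[i:i + box, j:j + box].sum()
                         for i in range(h - box + 1)
                         for j in range(w - box + 1)], dtype=float)
    # Second moment divided by the squared first moment of the box masses.
    return masses.var() / masses.mean() ** 2 + 1.0

# Illustrative use on a random "pore space" image (threshold chosen arbitrarily).
img = (np.random.rand(256, 256) > 0.7).astype(np.uint8)
print(box_counting_dimension(img), lacunarity(img))
```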

Abstract

The effectiveness of a weapon stabilization system depends largely on the choice of sensor, i.e. the accelerometer. The paper identifies and examines the fundamental errors of piezoelectric accelerometers and offers measures for their reduction. The errors of a weapon stabilizer piezoelectric sensor have been calculated; the instrumental measurement error does not exceed 0.1 × 10⁻⁵ m/s². The errors caused by the method of attachment to the base, various noise sources and zero-point drift can be mitigated by the design features of the piezoelectric sensors used in weapon stabilizers.
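The abstract lists the individual error sources but not their values (apart from the instrumental bound), so the sketch below only illustrates how independent accelerometer error components are commonly combined into a total error budget by a root-sum-square; the component values other than the instrumental bound are placeholders, not the paper's figures.

```python
import math

# Error components of a piezoelectric accelerometer, in m/s^2.
error_components = {
    "instrumental":     0.10e-5,  # upper bound quoted in the abstract
    "base_attachment":  0.05e-5,  # placeholder value
    "noise_sources":    0.03e-5,  # placeholder value
    "zero_point_drift": 0.02e-5,  # placeholder value
}

# Independent error sources are usually combined as a root-sum-square.
total = math.sqrt(sum(e ** 2 for e in error_components.values()))
print(f"combined error estimate: {total:.2e} m/s^2")
```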

Abstract

This research presents a comprehensive assessment of the quality of precision castings made in the Replicast CS process. The evaluation was based on the quality of the surface layer, shape errors and the accuracy of linear dimensions. The studies were carried out on modern equipment; among other instruments, a Zeiss Calypso measuring machine and a profilometer were used. The obtained results allowed a comparison between lost wax process models and the Replicast CS process.
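The abstract does not state how the accuracy of linear dimensions was quantified; the sketch below only shows a generic comparison of measured versus nominal dimensions (mean deviation and its spread) on made-up values, not the paper's measurement results.

```python
import statistics

# Made-up nominal and measured linear dimensions (mm) for the two processes.
nominal = [20.00, 35.00, 50.00, 80.00]
measured = {
    "lost_wax":     [20.08, 35.11, 50.06, 80.15],
    "replicast_cs": [20.03, 35.05, 50.02, 80.07],
}

for process, values in measured.items():
    deviations = [m - n for m, n in zip(values, nominal)]
    print(process,
          "mean deviation:", round(statistics.mean(deviations), 3), "mm,",
          "std dev:", round(statistics.stdev(deviations), 3), "mm")
```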

Abstract

This article analyzes the technology of creating and updating a digital topographic map using the method of mapping (generalization) on an updated map at a scale of 1:25,000 based on the source cartographic material. The main issue in the creation of digital maps is the study of production accuracy and the analysis of errors arising in the map production process. When determining the quality of a digital map, the completeness and accuracy of object and terrain mapping are evaluated, and the correctness of object identification, the logical consistency of the structure, and the representation of objects are assessed. The main and most effective method for taking relief displacement errors into account during image processing is orthotransformation, but the fragment used to update the digital topographic map needs additional verification of its compliance with the scale requirements of the map. An instrumental survey helps to clearly identify the areas of a space image closer to the nadir point and to reject poor-quality material. The software used for building the geodetic control network should provide stable accuracy regardless of the mapping scale, the physical and geographical conditions of the work area, or the conditions of aerial photography.
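The abstract refers to relief displacement errors and the preference for near-nadir image areas. A standard photogrammetric relation, not taken from the article itself, illustrates why displacement grows away from the nadir point: d = r·h/H, where r is the radial distance on the image, h the terrain height above the datum and H the flying height. The numeric values in the sketch are assumed for illustration.

```python
def relief_displacement(radial_dist_mm, terrain_height_m, flying_height_m):
    """Classical relief displacement on a vertical image: d = r * h / H.
    Displacement grows with radial distance from the nadir point, which is
    why near-nadir fragments are preferred for map updating."""
    return radial_dist_mm * terrain_height_m / flying_height_m

# Assumed example: a point 80 mm from nadir, 120 m above the datum,
# photographed from 3000 m -> 3.2 mm displacement on the image.
print(relief_displacement(80.0, 120.0, 3000.0))
```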

Abstract

DNA sequencing remains one of the most important problems in molecular and computational biology. One of the methods used for this purpose is sequencing by hybridization. In this approach, DNA chips composed of a full library of oligonucleotides of a given length are usually used, but in principle it is possible to use other types of chips. Isothermic DNA chips are one of them and, when used for sequencing, may reduce the hybridization error rate. However, it was not clear whether the number of errors resulting from subsequence repetitions is also reduced in this case. In this paper, a method for estimating the resolving power of isothermic DNA chips is described, which allows such chips to be compared with classical ones. The analysis of the resolving power shows that the probability of sequencing errors caused by subsequence repetitions is greater for isothermic chips than for their classical counterparts of similar cardinality. This result suggests that isothermic chips should be chosen carefully, since in some cases they may not give better results than the classical ones.
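A toy sketch contrasting the classical spectrum of a sequence (all substrings of one fixed length) with an isothermic spectrum built under the common 2 °C (A/T) / 4 °C (G/C) weighting, in which probe length varies so that each probe reaches the same "temperature". The example sequence and target temperature are illustrative assumptions; the paper's resolving-power estimation method is not reproduced.

```python
WEIGHT = {"A": 2, "T": 2, "C": 4, "G": 4}   # the usual 2/4 "temperature" weights

def classical_spectrum(seq, length):
    """All substrings of a fixed length (classical DNA chip library)."""
    return {seq[i:i + length] for i in range(len(seq) - length + 1)}

def isothermic_spectrum(seq, target_temp):
    """Substrings whose summed weight equals the target temperature,
    so probe length varies with A/T vs G/C content."""
    probes = set()
    for i in range(len(seq)):
        temp, j = 0, i
        while j < len(seq) and temp < target_temp:
            temp += WEIGHT[seq[j]]
            j += 1
        if temp == target_temp:
            probes.add(seq[i:j])
    return probes

seq = "ATGCGCATTACGGCA"                 # illustrative sequence
print(classical_spectrum(seq, 4))
print(isothermic_spectrum(seq, 12))     # illustrative target temperature
```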

Abstract

Time-domain analysis is used to determine whether A/D converters employing higher-order sigma-delta modulators, widely used in digital acoustic systems, offer superior performance over classical synchronous A/D converters with first-order modulators with respect to an important metrological property: the magnitude of the quantization error. It is shown that the quantization errors of sigma-delta A/D converters with higher-order modulators are at exactly the same level as those of converters with a first-order modulator.
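A minimal time-domain sketch of a first-order sigma-delta modulator with a one-bit quantizer, where the quantization error is read off after a crude averaging decimator. The oversampling ratio, input signal and decimation filter are illustrative assumptions, not the converters analyzed in the paper.

```python
import numpy as np

def sigma_delta_first_order(x):
    """First-order sigma-delta modulator with a 1-bit quantizer (+/-1)."""
    integrator, out = 0.0, np.empty_like(x)
    for n, sample in enumerate(x):
        # Integrate the difference between the input and the previous output.
        integrator += sample - (out[n - 1] if n else 0.0)
        out[n] = 1.0 if integrator >= 0 else -1.0
    return out

# Illustrative setup: slow sine input, oversampling ratio 64, averaging decimator.
osr = 64
n_samples = 1000 * osr
t = np.arange(n_samples) / n_samples
x = 0.5 * np.sin(2 * np.pi * 5 * t)
bits = sigma_delta_first_order(x)
decimated = bits.reshape(-1, osr).mean(axis=1)   # crude decimation filter
reference = x.reshape(-1, osr).mean(axis=1)
print("max quantization error:", np.abs(decimated - reference).max())
```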

Abstract

From the theory of reliability it follows that the greater the observational redundancy in a network, the higher its level of internal reliability. However, taking into account the physical nature of the measurement process, one may notice that planned additional observations may increase the number of potential gross errors in a network without raising the internal reliability to the theoretically expected degree. Hence, it is necessary to set realistic limits on a sufficient number of observations in a network. An attempt to provide principles for finding such limits is undertaken in the present paper. An empirically obtained formula (Adamczewski 2003), called there the law of gross errors and determining the chances that a certain number of gross errors may occur in a network, was taken as the starting point of the analysis. With the aid of an auxiliary formula derived on the basis of the Gaussian law, the Adamczewski formula was modified to become an explicit function of the number of observations in a network. This made it possible to construct the tools necessary for the analysis and, finally, to formulate guidelines for determining upper bounds for internal reliability indices. Since the Adamczewski formula was obtained for classical networks, the guidelines should be considered an introductory proposal requiring verification with reference to modern measuring techniques.
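The Adamczewski law of gross errors is not quoted in the abstract, so the sketch below uses a plain binomial model as a stand-in: under an assumed per-observation probability of a gross error, it shows how the chance of at least k gross errors grows with the number of observations, which is the effect the guidelines address.

```python
from math import comb

def prob_at_least_k_gross_errors(n_obs, k, p_gross=0.01):
    """Binomial stand-in (not the Adamczewski formula): probability that a
    network with n_obs observations contains at least k gross errors, assuming
    each observation is independently affected with probability p_gross."""
    p_fewer = sum(comb(n_obs, i) * p_gross**i * (1 - p_gross)**(n_obs - i)
                  for i in range(k))
    return 1.0 - p_fewer

# Adding observations raises redundancy but also the chance of extra gross errors.
for n in (50, 100, 200, 400):
    print(n, round(prob_at_least_k_gross_errors(n, k=2), 3))
```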

Abstract

This paper presents decision-making risk estimation based on planimetric large-scale map data, i.e. data sets or databases useful for creating planimetric maps at scales of 1:5,000 or larger. The studies were conducted on four data sets of large-scale map data. Errors of the map data were used for a risk assessment of decisions about the location of objects, e.g. for land-use planning in the realization of investments. An analysis was performed on a large statistical sample of shift vectors of control points, which were identified with the position errors of these points (errors of map data). Empirical cumulative distribution function models for decision-making risk assessment were established; the established models of the empirical cumulative distribution functions of the shift vectors of control points involve polynomial equations. The degree of compatibility of the polynomial with the empirical data was evaluated by the convergence coefficient and by the indicator of the mean relative compatibility of the model. The application of an empirical cumulative distribution function allows the probability of the occurrence of position errors of points in a database to be estimated. The estimated decision-making risk is represented by the probability of the errors of the points stored in the database.
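A sketch of the general workflow the abstract describes: computing the empirical cumulative distribution function of control-point shift-vector lengths, approximating it with a polynomial, and reading off the probability that a position error stays below a tolerance. The data, polynomial degree and tolerance are assumptions for illustration; the paper's convergence coefficient and mean relative compatibility indicator are not reproduced.

```python
import numpy as np

# Hypothetical shift-vector lengths (position errors of control points), in metres.
rng = np.random.default_rng(0)
shift_lengths = np.abs(rng.normal(0.0, 0.30, size=500))

# Empirical cumulative distribution function of the position errors.
x = np.sort(shift_lengths)
ecdf = np.arange(1, x.size + 1) / x.size

# Approximate the ECDF with a polynomial (degree chosen arbitrarily here).
model = np.poly1d(np.polyfit(x, ecdf, 5))

# Estimated probability that a point's position error does not exceed 0.5 m.
tolerance = 0.5
print("P(error <= 0.5 m) ~", round(float(np.clip(model(tolerance), 0.0, 1.0)), 3))
```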
