Early recognition of altered lactate levels is considered a useful prognostic indicator in disease detection for both human beings and animals. It is therefore reasonable to hypothesize that a portable, point-of-care (POC) spectrophotometric device for the analysis of lactate levels may have applications for field veterinarians across a range of conditions and diagnostic procedures. In this study, a total of 72 cattle in the transition period underwent POC spectrophotometric lactate measurement with a portable device (the Vet Photometer) in the field, with a small portion of blood used for comparative ELISA evaluation. Lactate measurements were compared using Passing-Bablok regression analysis and Bland-Altman plots. The Vet Photometer lactate measurement results were in agreement with those generated by the ELISA method: the 95% limits of agreement ranged from -1.3 to 0.99, and a positive correlation (r = 0.71) was found between the two measurements. Passing-Bablok regression analysis yielded the equation y = 0.68x + 0.60. There were no statistically significant differences in mean values between the measurement methods. In conclusion, the novel veterinary POC spectrophotometric device "Vet Photometer" is an accurate device for the evaluation of lactate levels in healthy transition cows.
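As an illustration (not part of the original study), the core of a Passing-Bablok comparison can be sketched in Python. The data below are hypothetical paired lactate readings, and the function is a simplified version of the method (median of pairwise slopes with the standard offset for slopes below -1; ties and confidence intervals are omitted for brevity):

```python
import numpy as np

def passing_bablok(x, y):
    """Simplified Passing-Bablok regression: the slope is the shifted
    median of all pairwise slopes; the intercept is the median residual."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dx = x[j] - x[i]
            if dx != 0:
                s = (y[j] - y[i]) / dx
                if s != -1:                # slopes of exactly -1 are excluded
                    slopes.append(s)
    slopes = np.sort(slopes)
    k = int(np.sum(slopes < -1))           # offset for negatively biased slopes
    m = len(slopes)
    if m % 2:
        slope = float(slopes[(m - 1) // 2 + k])
    else:
        slope = float(0.5 * (slopes[m // 2 - 1 + k] + slopes[m // 2 + k]))
    intercept = float(np.median(y - slope * x))
    return slope, intercept

# hypothetical paired lactate readings (mmol/L): reference ELISA vs. device
elisa  = [1.0, 1.5, 2.0, 2.8, 3.5, 4.1, 5.0]
device = [1.2, 1.6, 2.1, 2.6, 3.3, 4.0, 4.8]
slope, intercept = passing_bablok(elisa, device)
```

A slope near 1 and an intercept near 0 would indicate good agreement between the two methods; the abstract's y = 0.68x + 0.60 was obtained from the study's real data, not from this sketch.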
There is general agreement that remembering depends not only on the memory processes as such, but also that encoding, storage and retrieval are under the constant influence of overarching, metacognitive processes. Moreover, many interventions designed to improve memory in fact refer to metacognition. Most attempts to integrate the very different theoretical and experimental approaches in this domain focus on encoding, whereas there is relatively little integration of approaches that focus on retrieval. Therefore, we reviewed studies that used new ideas to improve memory retrieval by means of a "metacognitive intervention". We concluded that whereas single experimental manipulations were not likely to increase metacognitive ability, more extensive interventions were. We proposed a possible theoretical perspective, namely the Source Monitoring Framework, as a means to integrate the two, so far separate, ways of thinking about the role of metacognition in retrieval: the model of strategic regulation of memory, and the research on appraisals in autobiographical memory. We identified avenues for future research which could address, among other issues, the integration of these perspectives.
The aim of the research was to analyze the possibility of using mobile laser scanning (MLS) systems to acquire information for the production and/or updating of a base map, and to propose a no-reference index for this accuracy assessment. Point clouds were analyzed in terms of their interpretative content and geometric potential. For this purpose, the accuracy of georeferenced point clouds with respect to base-map objects was examined. In order to conduct reference measurements, a geodetic network was designed, and additional static laser scanning data were used. The analysis of MLS data accuracy was conducted with the use of 395 check points. In the paper, the total Error of Position of the base-map Objects acquired with the use of MLS is proposed as an accuracy index. The research results were related to reference total station measurements. The resulting error values indicate that an MLS point cloud can be used to accurately determine the coordinates of individual objects for the purposes of standard surveying studies, e.g. for updating some elements of the base-map content. Nevertheless, acquiring MLS point clouds with satisfactory accuracy is not always possible unless a specific resolution condition is fulfilled. The paper presents the results of the accuracy evaluation for different classes of base-map elements and objects.
A checkweigher is an automatic machine that measures the weight of in-motion products. It is usually located near the end of the production process and ensures that the weight of a product is within specified limits. Products are removed from the line if their weights are outside these limits. A checkweigher is usually equipped with an optical device, which triggers the timing window during which a product is completely on the weigh belt so that its weight can be sampled. In this paper, a new method of mass measurement for checkweighers is proposed which uses signal processing alone, without the optical device. The effectiveness of the method is shown through experiments. The possibility of faster estimation of the weight is also demonstrated.
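The abstract does not detail the signal-processing method, but the general idea of estimating a product's mass from the load-cell signal alone (no optical trigger) can be illustrated with a minimal sketch on synthetic data: detect the product from a threshold crossing of the signal itself, discard the transient edges, and average the settled plateau. All parameters and the signal shape are hypothetical:

```python
import numpy as np

def estimate_weight(signal, baseline, threshold=0.5):
    """Illustrative sketch: detect the product purely from the load-cell
    signal and average the middle half of the detected plateau."""
    s = np.asarray(signal, float) - baseline
    idx = np.flatnonzero(s > threshold)    # samples with a product on the belt
    if idx.size == 0:
        return None
    q = idx.size // 4                      # discard the transient edges
    plateau = s[idx[q: idx.size - q]]
    return float(plateau.mean())

# synthetic load-cell signal: zero baseline, product of mass 10 units
# with a superimposed oscillatory (ringing) transient
t = np.arange(200)
sig = np.where((t > 50) & (t < 150), 10 + 0.5 * np.sin(0.8 * t), 0.0)
mass = estimate_weight(sig, baseline=0.0)
```

A real checkweigher would additionally filter the belt-induced vibration; this sketch only shows why an optical trigger is, in principle, not required for locating the weighing window.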
The tendencies of modern industry are to increase the quality of manufactured products while simultaneously decreasing production time and cost. The hybrid system described here combines the advantages of the high accuracy of a contact CMM and the high measurement speed of non-contact structured-light optical techniques. The article describes the elements of the developed system together with the steps of the measurement process, with emphasis on segmentation algorithms. Additionally, the accuracy determination of such a system, realized with the help of a specially designed ball-plate measurement standard, is presented.
The interesting properties of a class of expanding systems are discussed. The operation of the considered systems can be described as follows: the input signal is processed by a linear dynamic converter in subsequent time intervals, each of length Ti. Processing starts at the moments n · Ti, always after zeroing of the converter's initial conditions. For smooth input signals and a given transfer function of the converter, one can suitably choose Ti and the gain coefficient in order to realize the postulated linear operations on input signals, which may differ substantially from the operation realized by the converter itself. The errors of the postulated operations are mainly caused by non-smooth components of the input signal. The principles for the choice of system parameters and the rules for system optimization are presented in the paper, together with illustrative examples.
The TerraSAR-X add-on for Digital Elevation Measurement (TanDEM-X) mission, launched in 2010, is another programme – after the Shuttle Radar Topography Mission (SRTM) in 2000 – that uses space-borne radar interferometry to build a global digital surface model. This article presents the accuracy assessment of the TanDEM-X intermediate Digital Elevation Model (IDEM), provided by the German Aerospace Center (DLR) under the project "Accuracy assessment of a Digital Elevation Model based on TanDEM-X data", for the southwestern territory of Poland. The study area included open terrain, urban terrain and forested terrain. Based on a set of 17,498 reference points acquired by airborne laser scanning, the mean errors of average heights and the standard deviations were calculated for areas with a terrain slope below 2 degrees, between 2 and 6 degrees, and above 6 degrees. The absolute accuracy of the IDEM data for the analysed area, expressed as a root mean square error (Total RMSE), was 0.77 m.
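The statistics named in the abstract (mean error, standard deviation, total RMSE of DEM heights against reference points) are standard in DEM validation and can be computed as follows; the check-point heights below are hypothetical, not the study's data:

```python
import numpy as np

def dem_accuracy(dem_h, ref_h):
    """Mean error (bias), sample standard deviation and total RMSE of
    DEM heights against reference (e.g. airborne laser scanning) points."""
    d = np.asarray(dem_h, float) - np.asarray(ref_h, float)
    mean_err = float(d.mean())                 # systematic height bias
    sdev = float(d.std(ddof=1))                # spread around the bias
    rmse = float(np.sqrt(np.mean(d ** 2)))     # total RMSE
    return mean_err, sdev, rmse

# hypothetical DEM vs. reference heights (m) at a handful of check points
mean_err, sdev, rmse = dem_accuracy([101.2, 98.7, 250.4, 133.0],
                                    [101.0, 99.0, 250.0, 132.5])
```

In practice, as in the study, these statistics would be computed separately per land-cover class and terrain-slope band.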
The objective of the research was to verify the accuracy of the location and shape of selected lakes presented on topographic maps from various periods, drawn up at different scales. The area of research covered lakes situated in north-western Poland on the Międzychód-Sieraków Lakeland. An analysis was performed of maps available in both analogue and digital (vector) format, at scales ranging from 1:50 000 to 1:10 000. The source materials were current for the years 1907 through 2013. The shape and location of the lakes were verified directly by means of field measurements performed using GPS RTK technology. An analysis was performed of the location and shape of five lakes. The analysed water bodies were vectorised, and their vector images were used to determine quantitative features: the area and the length of the shoreline. Information concerning the analysed lakes obtained from the maps was verified on the basis of direct field measurements performed with a GPS RTK receiver, making use of georeferential corrections provided by the NAVGEO service or a virtual reference station generated by the ASG EUPOS system. A compilation of cartographic and field data formed the basis for a comparison of the actual area and shoreline length of the studied lakes. The cartographic analyses made it possible to single out the most reliable cartographic sources, which could be used for the purposes of hydrographical analyses. The course of the shorelines is shown on the attached map.
This research presents a comprehensive assessment of the quality of precision castings made in the Replicast CS process. The evaluation was based on the quality of the surface layer, shape errors and the accuracy of linear dimensions. The studies were carried out on modern equipment; among other instruments, a Zeiss Calypso measuring machine and a profilometer were used. The results obtained allowed a comparison of lost-wax process models and the Replicast CS process.
This paper provides analyses of the accuracy and convergence time of the PPP method using the GPS system and different IGS products. The official IGS products – Final, Rapid and Ultra-Rapid – as well as MGEX products calculated by the CODE analysis centre were used. In addition, calculations with an observation weighting function depending on the elevation angle were carried out. The best results were obtained for CODE products with a precise ephemeris at a 5-minute interval and precise satellite clock corrections at a 30-second interval. For these calculations, the accuracy of position determination was at the level of 3 cm, with a convergence time of 44 min. The Final and Rapid products, i.e. orbits with a 15-minute interval and clocks with a 5-minute interval, gave very similar results. The same level of accuracy was obtained for calculations with CODE products for which both the precise ephemeris and the precise satellite clock corrections had a 5-minute interval. For these calculations, the accuracy was 4 cm, with a convergence time of 70 min. The worst accuracy was obtained for calculations with Ultra-Rapid products, with an interval of 15 minutes: 10 cm, with a convergence time of 120 min. The use of the weighting function improved the accuracy of position determination in each case except for the calculations with Ultra-Rapid products. The use of this function slightly increased the convergence time, except for the calculation with CODE products, where it was reduced to 9 min.
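The abstract does not specify the form of the elevation-dependent weighting function. A commonly used scheme in GNSS processing (an assumption here, not necessarily the one used in the paper) scales the a-priori observation sigma by 1/sin(elevation), so that the weight grows as sin² of the elevation angle:

```python
import math

def obs_weight(elev_deg, sigma0=0.003):
    """Common elevation-dependent GNSS weighting: sigma = sigma0 / sin(E),
    hence weight = 1/sigma^2 ~ sin^2(E). sigma0 (metres) is a hypothetical
    zenith-direction sigma, not a value from the paper."""
    e = math.radians(elev_deg)
    sigma = sigma0 / math.sin(e)
    return 1.0 / sigma ** 2

w_high = obs_weight(90.0)   # zenith observation: maximum weight
w_low = obs_weight(10.0)    # low-elevation observation: strongly down-weighted
```

Down-weighting low-elevation observations reduces the influence of multipath and residual tropospheric error, which is why such a function can improve PPP position accuracy.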
The paper presents the results of research on the possibility of fixing ship position coordinates based on the results of surveying bearings on navigational marks with the use of a CCD camera. The accuracy of the determination of ship position coordinates, expressed in terms of the mean error, was assumed to be the basic criterion of this estimation. The first part of the paper describes the method of determining the resolution and the mean error of an angle measurement taken with a camera, as well as the method of determining the mean error of position coordinates when two or more bearings are measured. Three software applications were defined for the development of navigational sea charts with accuracy areas mapped onto them. The second part contains the results of studying the accuracy of fixing ship position coordinates, carried out in the Gulf of Gdansk with the use of bearings obtained with Rolleiflex and Sony cameras. The results are presented in the form of diagrams of the mean error of the angle measurement, as well as in the form of navigational charts with accuracy fields mapped onto them. In the final part, based on the results obtained, the applicability of CCD cameras in the automation of the coastal navigation process is discussed.
According to metrological guidelines and specific legal requirements, every smart electronic electricity meter has to be verified repeatedly after pre-defined, regular time intervals. The problem is that in most cases these pre-defined time intervals are based on previous experience or empirical knowledge, and rarely on scientifically sound data. Since the verification itself is a costly procedure, it would be advantageous to put more effort into defining the required verification periods. A fixed verification interval, recommended by various internal documents, standardised evaluation procedures and national legislation, could then be technically and scientifically better justified and consequently more appropriate and trustworthy for the end user. This paper describes an experiment to determine the effect of alternating temperature and humidity, and of constant high current, on a smart electronic electricity meter's measurement accuracy. Based on an analysis of these effects, it is proposed that the current fixed verification interval could be revised, also taking different climatic influences into account. The findings of this work could inform a new standardized procedure for determining a meter's verification interval.
The paper addresses the problem of automatic distortion removal from images acquired with a non-metric SLR camera equipped with prime lenses. From the photogrammetric point of view, the following question arises: is the accuracy of the distortion control data provided by the manufacturer for a certain lens model (not an individual item) sufficient to achieve the demanded accuracy? In order to obtain a reliable answer, two kinds of tests were carried out for three lens models. Firstly, multi-variant camera calibration was conducted using software providing a full accuracy analysis. Secondly, an accuracy analysis using check points was performed. The check points were measured in images resampled based on the estimated distortion model, or in distortion-free images acquired directly in the automatic distortion removal mode. Extensive conclusions regarding the practical application of each calibration approach are given. Finally, rules for applying automatic distortion removal in photogrammetric measurements are suggested.
A parcel is the most important object of the real estate cadastre. Its primary spatial attributes are its boundaries, which determine the extent of property rights. Capturing data on boundaries should be performed in a way that ensures sufficiently high accuracy and reliability. In recent years, as part of the project "ZSIN – Construction of Integrated Real Estate Information System – Stage I", actions aimed at the modernization of the register of land and buildings were taken in the territories of the participating districts. In many cases, this process was carried out on the basis of photogrammetric materials, which the applicable regulations allow. This paper, drawing on documentation from the National Geodetic and Cartographic Documentation Center and on the authors' own surveys, attempts to assess the applicability of the photogrammetric method to capturing data on the boundaries of cadastral parcels. The scope of the research, most importantly, included the accuracy with which the position of a boundary point could be determined using photogrammetric surveys carried out on a terrain model created from processed aerial photographs. The article demonstrates the manner of recording this information in the cadastral database, as well as the resulting legal consequences. Moreover, the level of reliability of the entered values of selected attributes of boundary points was assessed.
The paper relates to the problem of adapting V-block methods to waviness measurements of cylindrical surfaces. It presents the fundamentals of V-block methods and the principle of their application. V-block methods can be successfully used to measure the roundness and waviness deviations of large cylinders used in the paper industry, the shipping industry, or in metallurgy. The concept of adapting the V-block method to waviness measurements of cylindrical surfaces was verified using computer simulations and experimental work. The computer simulation was carried out in order to check whether the proposed mathematical model and the V-block method parameters are correct. Based on the simulation results, a model of the ROL-2 measuring device for V-block waviness measurements was developed. Next, experimental research was carried out, consisting of the evaluation of the waviness deviation, initially using a standard non-reference measuring device and then using the tested device based on the V-block method. Finally, the accuracy of the experimental V-block method was calculated.
The paper presents the application of liquid crystal thermography for temperature determination and the visualisation of two-phase flow images on the studied surface. The properties and applications of thermochromic liquid crystals are discussed. Liquid crystals were applied for two-dimensional detection of the temperature of the heating foil forming one of the surfaces of the minichannel along which the cooling liquid flowed. The heat flux supplied to the heating surface was varied during the investigation, which was accompanied by a change in the colour distribution on the surface. The accuracy of temperature measurements on the surface with liquid crystal thermography is estimated. The method of visualising two-phase flow structures is described. The analysis of monochrome images of flow structures was employed to calculate the void fraction for selected cross-sections. The flow structure photographs were processed using Corel graphics software and binarized. The analysis of phase volumes employed Techsystem Globe software. The measurement error of the void fraction is estimated.
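Once a flow-structure image has been binarized, the void fraction of a cross-section reduces to a pixel count. A minimal sketch on a hypothetical binary image (the paper used dedicated graphics software for this step):

```python
import numpy as np

# hypothetical binarized flow-structure image: 1 = vapour phase, 0 = liquid
img = np.zeros((4, 6), dtype=int)
img[1:3, 2:5] = 1                      # a 2x3 block of vapour pixels

# area-based void fraction: fraction of vapour pixels in the cross-section
void_fraction = float(img.mean())
```

Real images additionally require thresholding and noise removal before this count is meaningful, which is where the estimated measurement error of the void fraction originates.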
This paper presents a low-cost, smart measurement system to acquire and analyze mechanical motion parameters. The measurement system integrates several measuring nodes, each including one or more triaxial accelerometers, a temperature sensor, a data acquisition unit and a wireless communication unit. Particular attention was dedicated to measurement system accuracy and to the compensation of measurement errors caused by power supply voltage variations, temperature variations and accelerometer misalignments. Mathematical relationships for error compensation were derived, and software routines for measurement system configuration, data acquisition, data processing and self-testing were developed. The paper includes several simulation and experimental results obtained from an assembled prototype based on a crank-piston mechanism.
Stealth is a frequent requirement in military applications and involves the use of devices whose signals are difficult to intercept or identify by the enemy. The silent sonar concept was studied and developed at the Department of Marine Electronic Systems of the Gdansk University of Technology. The work included a detailed theoretical analysis, computer simulations and some experimental research. The results of the theoretical analysis and computer simulation suggested that target detection and positioning accuracy deteriorate as the speed of the target increases, a consequence of the Doppler effect. As a result, more research and measurements had to be conducted to verify the initial findings. To ensure that the results can be compared with those from the experimental silent sonar model, the target's actual position and speed had to be precisely controlled. The article presents the measurement results of a silent sonar model looking at its detection, range resolution and problems of incorrect positioning of moving targets as a consequence of the Doppler effect. The results were compared with those from the theoretical studies and computer simulations.
This paper is devoted to a detailed, experimentally based analysis of the applicability of vector network analyzers for measuring the impedance of surface-mount inductors with and without DC bias. The measurements are made using custom-made bias tees and a test fixture with an ordinary vector network analyzer. The main attention in the analysis is focused on the measurement accuracy of the impedance of surface-mount inductors. Measurement results obtained with a vector network analyzer are also compared to those obtained using an impedance analyzer based on the auto-balancing bridge method.
Computer-aided tools help shorten and eliminate numerous repetitive tasks, thereby reducing the gap between the digital model and the actual product. These tools assist in realizing free-form objects such as custom-fit products, which are defined by close interaction with the human body. The development of such a model presents a challenging situation for reverse engineering (RE), which is not analogous to the requirements for generating simple geometric models. Hence, an alternative way of producing more accurate three-dimensional models is proposed. To create accurate 3D models, point clouds are processed through filtering, segmentation, mesh smoothing and surface generation. These processes convert the initial unorganized point data into a 3D digital model and simultaneously influence the quality of the model. This study provides an optimum balance between the best obtainable accuracy and the maximum allowable deviation, in order to reduce computer handling and processing time. A realistic, non-trivial case study of a free-form prosthetic socket is considered. The accuracy obtained for the developed model is acceptable for use in medical applications and FEM analysis.
This paper presents decision-making risk estimation based on planimetric large-scale map data, i.e. data sets or databases useful for creating planimetric maps at scales of 1:5,000 or larger. The studies were conducted on four data sets of large-scale map data. Errors of the map data were used for a risk assessment of decisions about the localization of objects, e.g. for land-use planning in the realization of investments. An analysis was performed for a large statistical sample of shift vectors of control points, which were identified with the position errors of these points (errors of map data). In this paper, empirical cumulative distribution function models for decision-making risk assessment were established. The established models of the empirical cumulative distribution functions of shift vectors of control points involve polynomial equations. The degree of compatibility of each polynomial with the empirical data was evaluated using the convergence coefficient and the indicator of the mean relative compatibility of the model. The application of an empirical cumulative distribution function allows the probability of the occurrence of position errors of points in a database to be estimated. The estimated decision-making risk is represented by the probability of the errors of the points stored in the database.
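The described approach (an empirical cumulative distribution function of position errors, approximated by a polynomial and read off to estimate a probability) can be sketched as follows. The error sample and the polynomial degree are hypothetical; the paper's polynomial models and compatibility indicators are not reproduced here:

```python
import numpy as np

# hypothetical shift-vector lengths (m) of control points (position errors)
errors = np.array([0.05, 0.08, 0.10, 0.12, 0.15, 0.18, 0.22, 0.30, 0.41, 0.55])

# empirical cumulative distribution function of the position errors
x = np.sort(errors)
ecdf = np.arange(1, len(x) + 1) / len(x)

# approximate the ECDF with a polynomial (degree 3 chosen for illustration)
coeffs = np.polyfit(x, ecdf, deg=3)
model = np.poly1d(coeffs)

# estimated probability that a point's position error does not exceed 0.2 m,
# clipped to the valid probability range
p_02 = float(np.clip(model(0.2), 0.0, 1.0))
```

Reading the fitted model at a tolerance threshold gives exactly the kind of risk figure the abstract describes: the probability that a stored point's position error stays within that threshold.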
The paper presents the problem of assessing the accuracy of reconstructing free-form surfaces in CMM/CAD/CAM/CNC systems. The system structure comprises a coordinate measuring machine (CMM) PMM 12106 equipped with a contact scanning probe, a 3-axis Arrow 500 Vertical Machining Center, QUINDOS software and Catia software. For the purpose of surface digitalization, a radius correction algorithm was developed. The surface reconstruction errors for the presented system were assessed and analysed with respect to offset points. The accuracy assessment exhibited error values in the reconstruction of a free-form surface in a range of ±0.02 mm which, as the analysis shows, result from a systematic error.
Measurement data obtained from Weigh-in-Motion (WIM) systems support the protection of road pavements from the adverse phenomenon of vehicle overloading. For this protection to be effective, WIM systems must be accurate and must obtain a certificate of metrological legalization. Unfortunately, there is no legal standard for the accuracy assessment of WIM systems. Due to the international range of road transport, it is necessary to standardize the methods and criteria applied for assessing such systems' accuracy. In this paper we present two methods of determining the accuracy of WIM systems. Both are based on the population of weighing errors determined experimentally during system testing. The first method, called the reliability characteristic, was developed by the authors. The second method is based on determining the boundaries of the tolerance interval for weighing errors. The properties of both methods were assessed on the basis of simulation studies as well as experimental results obtained from a 16-sensor WIM system.
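The second method rests on a tolerance interval for the weighing-error population. As a rough illustration only (the paper's exact construction is not given in the abstract), a non-parametric variant takes symmetric empirical quantiles of the error sample as interval boundaries; the error sample below is simulated:

```python
import numpy as np

def error_tolerance_interval(errors, coverage=0.95):
    """Simple non-parametric sketch: symmetric empirical quantiles of the
    weighing-error sample serve as tolerance-interval boundaries."""
    e = np.asarray(errors, float)
    lo_q = (1.0 - coverage) / 2.0
    return float(np.quantile(e, lo_q)), float(np.quantile(e, 1.0 - lo_q))

# simulated relative weighing errors (%) from a hypothetical WIM test series
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=2.0, size=500)
lo, hi = error_tolerance_interval(sample)
```

A WIM system would pass the assessment if the resulting interval [lo, hi] stays within the error limits prescribed for its accuracy class.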