The frictional ventilation resistance coefficient of a coal-mine roadway is a key technical parameter in the design and renovation of mine ventilation. Calculations based on empirical formulae and on field tests both have limitations. This study proposes an inversion method that calculates the mine ventilation resistance coefficient from a few representative measurements of air flows and node pressures. The mathematical model of the inversion method is developed from the principle of least squares. The objective function accounts for the deviations between the measured and calculated pressures and between the measured and calculated flows, subject to range constraints on the node pressures, the air flows, and the ventilation resistance coefficients. Through this model, the ventilation resistance coefficient inversion problem is converted into a nonlinear optimisation problem, which is solved with a genetic algorithm (GA). The GA is improved to enhance both the global and the local search abilities of the algorithm for this inversion problem.
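As a toy illustration of the least-squares inversion idea (not the network-scale model of the paper), the sketch below inverts the resistance coefficient of a single hypothetical branch obeying Atkinson's square law, with the range constraint handled by clipping; the data values, bounds and GA settings are all invented:

```python
import random

# Toy single-branch inversion: Atkinson's square law h = R * Q**2.
# R_TRUE, the flow values and the GA settings are invented for illustration.
R_TRUE = 0.35
data = [(q, R_TRUE * q ** 2) for q in (10.0, 20.0, 30.0, 40.0)]

def objective(r):
    # Least-squares deviation between "measured" and computed pressure losses.
    return sum((h - r * q ** 2) ** 2 for q, h in data)

def ga_invert(lo=0.01, hi=1.0, pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]                # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            c = 0.5 * (a + b) + rng.gauss(0.0, 0.02)  # crossover + Gaussian mutation
            children.append(min(hi, max(lo, c)))      # range constraint
        pop = parents + children
    return min(pop, key=objective)

r_est = ga_invert()
```

Because the parents survive each generation, the best candidate can only improve, which mimics the elitist behaviour usually built into such inversion GAs.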
Raman spectrometers are devices which enable fast, non-contact identification of examined chemicals. These devices exploit the Raman effect to identify unknown and often illicit chemicals (e.g. drugs, explosives) without sample preparation. Raman devices can now be portable and can therefore be used more widely to improve security in public places. Unfortunately, measuring Raman spectra outside the laboratory is a challenge due to the noise and interference present there. The design of a portable Raman spectrometer developed at the Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology is presented. The paper outlines the sources of interference in Raman spectrum measurements and the signal processing techniques required to reduce their influence (e.g. background removal, spectrum smoothing). Finally, selected algorithms for automated chemical classification are presented. The algorithms compare the measured Raman spectrum with a reference spectral library to identify the sample. The detection efficiency of these algorithms is discussed and directions for further research are outlined.
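As a hedged sketch of the library-matching step (the actual spectra, preprocessing and classifiers of the paper differ), the example below identifies a synthetic "measured" spectrum by cosine similarity against a two-entry reference library; all peak positions and substance names are placeholders:

```python
import numpy as np

# Synthetic spectra: sums of Gaussian bands standing in for Raman spectra.
wavenumbers = np.linspace(200, 2000, 500)

def synth_spectrum(peaks):
    s = np.zeros_like(wavenumbers)
    for center, width, amp in peaks:
        s += amp * np.exp(-((wavenumbers - center) / width) ** 2)
    return s

# Invented two-entry reference library.
library = {
    "substance_A": synth_spectrum([(520, 15, 1.0), (1340, 20, 0.6)]),
    "substance_B": synth_spectrum([(800, 18, 0.9), (1600, 25, 0.8)]),
}

def identify(measured):
    # Pick the library entry with the highest cosine similarity.
    def cos_sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(library, key=lambda name: cos_sim(measured, library[name]))

# A noisy measurement of substance_A should still match correctly.
rng = np.random.default_rng(0)
noisy = library["substance_A"] + 0.05 * rng.standard_normal(wavenumbers.size)
best = identify(noisy)
```

Real identification pipelines first remove the fluorescence background and smooth the spectrum, exactly the preprocessing steps the abstract mentions.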
The effectiveness of gradient algorithms for two-dimensional steady-state heat transfer problems is analysed. Three gradient algorithms - the BCG (biconjugate gradient algorithm), the BICGSTAB (biconjugate gradient stabilized algorithm), and the CGS (conjugate gradient squared algorithm) - are implemented in a computer code. Because boundary conditions of the first type are imposed, the results can be compared with the analytical solution. Computations are carried out for different numerical grid densities, which makes it possible to investigate how the grid density influences the efficiency of the gradient algorithms. The total computational time, the residual drop and the iteration time of the gradient algorithms are additionally compared with the performance of the SOR (successive over-relaxation) method.
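A minimal matrix-free BICGSTAB sketch for the problem class discussed above (2-D steady heat conduction with first-type, i.e. Dirichlet, boundary conditions on a square grid) might look as follows; the grid size, source term and tolerance are illustrative assumptions:

```python
import numpy as np

n = 20                      # interior grid points per direction (assumed)
h = 1.0 / (n + 1)

def apply_A(u):
    # 5-point Laplacian stencil with zero Dirichlet boundary values.
    U = u.reshape(n, n)
    P = np.pad(U, 1)
    return ((4 * U - P[:-2, 1:-1] - P[2:, 1:-1]
             - P[1:-1, :-2] - P[1:-1, 2:]) / h ** 2).ravel()

def bicgstab(b, tol=1e-10, maxit=500):
    # Textbook unpreconditioned BiCGSTAB, matrix-free via apply_A.
    x = np.zeros_like(b)
    r = b - apply_A(x)
    r0 = r.copy()
    rho = alpha = omega = 1.0
    v = np.zeros_like(b)
    p = np.zeros_like(b)
    for _ in range(maxit):
        rho_new = r0 @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = apply_A(p)
        alpha = rho / (r0 @ v)
        s = r - alpha * v
        t = apply_A(s)
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

b = np.ones(n * n)          # uniform heat source (assumed)
u = bicgstab(b)
residual = float(np.linalg.norm(b - apply_A(u)) / np.linalg.norm(b))
```

The residual drop per iteration on grids of increasing density is exactly the quantity the study compares against SOR.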
The primary contribution of the paper is the application of an efficient formulation to the simulation of an open-loop lightweight robotic manipulator. The framework employed in the paper makes use of spatial operator algebra, and the associated equations are expressed in joint space. This compact representation of the manipulator dynamics makes it possible to solve the robot forward and inverse dynamics problems in a recursive and fast manner. In its current form, the presented algorithm can be applied to the dynamics simulation of an open-loop chain system with any number of joints. Specifically, the formulation has been successfully applied to the analysis of the 7-DOF KUKA LWR robot. Results from a number of test cases verify the calculations.
The presented paper concerns CFD optimization of a straight-through labyrinth seal with a smooth land. The aim of the process was to reduce the leakage flow through a labyrinth seal with two fins. Due to the complexity of the problem, and to limit computation time, a decision was made to modify the standard evolutionary optimization algorithm by adding a metamodel-based approach. Five basic geometrical parameters of the labyrinth seal were taken into account: the angles of the seal’s two fins, and the fin width, height and pitch. Other parameters were constrained, including the clearance over the fins. The CFD calculations were carried out using the ANSYS-CFX commercial code. The in-house optimization algorithm was prepared in the Matlab environment. The metamodel was built using a Multi-Layer Perceptron Neural Network trained with the Levenberg-Marquardt algorithm. The Neural Network training and validation were carried out on data from CFD analyses performed for different geometrical configurations of the labyrinth seal. The initial response surface was built based on the design of experiment (DOE). The novelty of the proposed methodology is the steady improvement in the goodness of fit of the response surface. The accuracy of the response surface is increased by CFD calculations of additional geometrical configurations of the labyrinth seal. These configurations are created by the evolutionary algorithm operators: selection, crossover and mutation. The created metamodel makes it possible to run a fast optimization process using the previously prepared response surface. The metamodel solution is validated against CFD calculations and then complements the next generation of the evolutionary algorithm.
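The metamodel-in-the-loop idea can be caricatured in one dimension: fit a cheap response surface to a few "expensive" evaluations, optimise the surface, validate the candidate with the true model, and feed it back into the training set. The objective function, the polynomial surrogate and the budget below are invented stand-ins for the CFD model and the neural-network metamodel:

```python
import numpy as np

def expensive(x):
    # Stand-in for a CFD leakage evaluation (invented toy function).
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

X = list(np.linspace(0.0, 1.0, 5))   # initial design of experiment (DOE)
Y = [expensive(x) for x in X]

for _ in range(6):
    coeffs = np.polyfit(X, Y, deg=4)             # cheap response surface
    grid = np.linspace(0.0, 1.0, 1001)
    cand = grid[np.argmin(np.polyval(coeffs, grid))]  # optimise the surrogate
    X.append(cand)                                # validate with the true model
    Y.append(expensive(cand))                     # ...and refine the surface

x_best = float(X[int(np.argmin(Y))])
```

Each loop iteration plays the role of one evolutionary generation in the paper: the surrogate proposes, the expensive model validates, and the new point improves the surface's goodness of fit.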
In this paper, a new Multi-Layer Perceptron Neural Network (MLP NN) classifier is proposed for distinguishing sonar targets from non-targets in acoustic backscattered signals. Despite their capabilities, MLP NNs are conventionally trained with Back Propagation (BP) and Gradient Descent (GD); they therefore suffer not only from inadequate classification accuracy but also from getting stuck in local minima and from low convergence speed. To overcome these shortcomings, this study uses the Adaptive Best Mass Gravitational Search Algorithm (ABGSA) to train the MLP NN. This algorithm mitigates a drawback of the GSA by using the best masses collected within iterations and by expediting the exploitation phase. To test the proposed classifier, the algorithm is assessed against the GSA, GD, GA, PSO and the compound method (PSOGSA) on three datasets of various dimensions. The assessed metrics include convergence speed, the probability of failure in a local minimum, and classification accuracy. Finally, as a practical application, the network classifies a sonar dataset consisting of backscattered echoes from six different objects: four targets and two non-targets. The results indicate that the new classifier outperforms all the benchmarks in terms of the aforementioned criteria.
Rockburst is a common engineering geological hazard. In order to evaluate rockburst liability in kimberlite at an underground diamond mine, a method combining generalized regression neural networks (GRNN) and the fruit fly optimization algorithm (FOA) is employed. Based on two fundamental premises of rockburst occurrence, depth, σθ, σc, σt, B1, B2, SCF and Wet are determined as rockburst indicators, which also form the input vector of the GRNN model. 132 groups of data from rockburst cases all over the world are chosen as training samples for the GRNN model; FOA is used to seek the optimal parameter σ that produces the most accurate GRNN model. The trained GRNN model is adopted to evaluate burst liability in kimberlite pipes. The same eight rockburst indicators are acquired from laboratory tests, the mine site and an FEM model as test sample features. The evaluation results made by the GRNN are confirmed by a rockburst case at this mine. GRNN requires no prior knowledge of the relationship between the input and output variables and avoids analysing the mechanism of rockburst, which gives it a bright prospect for engineering evaluation of rockburst potential.
The material presents a real problem inherent in the management of computer systems, namely that of finding the appropriate system settings so as to achieve the expected performance. The material also presents a prototype which aims to adapt the system towards a defined objective, namely application efficiency. The prototype uses the resource-oriented mechanism built into the OS Workload Manager, combined with a proposed goal-oriented subsystem based on fuzzy logic. The subsystem manages resources so as to make the best use of them, translating the goal into the use of system resources while accounting for nondeterministic technology-related factors such as the duration of allocation and release of the resources, sharing of the resources in the uncapped mode, and performance measurement errors.
In the paper a frequency-domain method of filtering airborne laser data is presented. A number of algorithms that remove objects above the terrain (buildings, vegetation etc.) in order to obtain the terrain surface have been presented in the literature. All of those published methods are based on geometrical criteria, i.e. on a specific threshold of elevation differences between two neighbouring points or groups of points; in other words, the topographical surface is described in the spatial domain. The proposed algorithm operates on the topographical surface described in the frequency domain. Two major tools are used: the Fast Fourier Transform (FFT) and digital filters. The principal assumption is that low frequencies are responsible for the terrain surface, while high frequencies are connected with objects above the terrain. The general guidelines of this method were first presented in (Marmol and Jachimski, 2004). Because the preliminary results showed some limitations, a two-stage filtering algorithm has been introduced. The frequency filter was modified so that different filter parameters are used to detect buildings than to recognize vegetation. In the first stage of data processing, filtering aimed at eliminating points connected with urban areas was applied; a low-pass filter with parameters determined for urban areas was used for the whole tested terrain. The purpose of the second stage was to eliminate vegetation by using a filter tuned for forest areas. The presented method was tested on data sets from the ISPRS test on extracting DTMs from point clouds. The results of the two-stage algorithm were compared both with reference data and with the filtering results of the eight methods reported in the ISPRS test. A numerical comparison of the filter output with a reference data set shows that the filter generates a DTM of satisfactory quality. The accuracy of the DTM produced by the frequency algorithm matches the average accuracy of the eight methods reported in the ISPRS test.
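The low-frequency-terrain assumption can be demonstrated on a synthetic 1-D profile: smooth ground described by low frequencies plus a high-frequency component standing in for above-terrain objects. In this idealised case a hard low-pass cutoff in the FFT domain recovers the ground exactly; real LiDAR data is, of course, far less clean:

```python
import numpy as np

n = 512
x = np.linspace(0, 1, n, endpoint=False)
terrain = 5.0 + 2.0 * np.sin(2 * np.pi * x)    # smooth ground: DC + 1st harmonic
objects = 1.5 * np.sin(2 * np.pi * 40 * x)     # high-frequency stand-in for objects
profile = terrain + objects

spectrum = np.fft.rfft(profile)
cutoff = 5                                     # keep only terrain frequencies
spectrum[cutoff:] = 0.0
filtered = np.fft.irfft(spectrum, n)

max_err = float(np.max(np.abs(filtered - terrain)))
```

Because the terrain lives entirely below the cutoff and the objects entirely above it, the filtered profile matches the terrain to numerical precision; the two-stage method in the paper exists precisely because real buildings and vegetation overlap the terrain band differently.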
The fundamental concepts of nano and quantum systems of informatics are presented. The nanotechnological processes taking place in biological systems of informatics are discussed in terms of informatics. The presented analysis shows that the application of nanotechnologies in technical informatic systems enables the realization of processes for the formation of products and objects with a self-replication feature, similar to the processes existing in biological informatic systems. It also seems that quantum technologies enable further miniaturization of technical systems of informatics and shorten the execution time of some computing processes, e.g. Shor's and Grover's algorithms.
Evolutionary computing and algorithms are well-known optimisation tools utilized in various areas of analogue electronic circuit design and diagnosis. This paper presents the possibility of using two evolutionary algorithms - the genetic algorithm and evolutionary strategies - for analogue circuit yield and cost optimisation. The terms technological and parametric yield are defined. Procedures for parametric yield optimisation, such as design centring, design tolerancing, and design centring with tolerancing, are introduced. The basics of the genetic algorithm and evolutionary strategies are presented, the differences between the two algorithms are highlighted, and certain implementation aspects are discussed. The effectiveness of both algorithms in parametric yield optimisation has been tested on several examples and the results are presented. The share of the evolutionary algorithms' computation cost in the total optimisation cost is analysed.
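As a toy illustration of parametric yield (not the circuits or algorithms of the paper), the Monte Carlo sketch below estimates the fraction of RC low-pass circuits whose cutoff frequency stays within an assumed spec window as component tolerances widen:

```python
import math
import random

def yield_estimate(r_nom, c_nom, tol, n=20000, seed=7):
    # Parametric yield: share of sampled circuits whose cutoff frequency
    # fc = 1/(2*pi*R*C) falls inside an invented spec window [900, 1100] Hz.
    rng = random.Random(seed)
    f_lo, f_hi = 900.0, 1100.0
    ok = 0
    for _ in range(n):
        r = r_nom * (1 + rng.uniform(-tol, tol))   # R within its tolerance band
        c = c_nom * (1 + rng.uniform(-tol, tol))   # C within its tolerance band
        fc = 1.0 / (2 * math.pi * r * c)
        if f_lo <= fc <= f_hi:
            ok += 1
    return ok / n

# Nominal values chosen so fc is ~1000 Hz; tighter tolerances -> higher yield.
y_tight = yield_estimate(159.15, 1e-6, 0.02)
y_loose = yield_estimate(159.15, 1e-6, 0.10)
```

Design centring and tolerancing, as introduced in the paper, are exactly the knobs that move such a yield figure: shifting the nominal point and shrinking the tolerance box.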
This research presents a comparative study of maximum power point tracking (MPPT) methodologies for a photovoltaic (PV) system. A novel hybrid algorithm, golden section search assisted perturb and observe (GSS-PO), is proposed to solve the problems of conventional PO (CPO) and to boost its efficiency. The new algorithm has a very low convergence time and a very high efficiency. GSS-PO is compared with the intelligent nature-inspired multi-verse optimization (MVO) algorithm through simulation. The simulation study reveals that the novel GSS-PO outperforms MVO both under uniform irradiance conditions and under a sudden change in irradiance.
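The golden-section stage of such a hybrid can be sketched on an assumed unimodal power-voltage curve; the P(V) model below is a made-up placeholder, not a real PV cell model:

```python
import math

def pv_power(v):
    # Invented concave power-voltage curve with its maximum near v ~ 16.5 V.
    return v * max(0.0, 5.0 * (1 - math.exp((v - 21.0) / 2.0)))

def golden_section_max(f, lo, hi, tol=1e-4):
    # Golden-section search for the maximum of a unimodal function.
    # (Re-evaluates f each step for clarity; a practical version caches values.)
    invphi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

v_mpp = golden_section_max(pv_power, 0.0, 21.0)
```

The appeal for MPPT is that each interval shrink costs only one new power measurement, which is what makes the convergence time so low before the perturb-and-observe stage takes over fine tracking.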
The paper presents an identification procedure of electromagnetic parameters for an induction motor equivalent circuit including rotor deep bar effect. The presented procedure employs information obtained from measurement realised under the load curve test, described in the standard PN-EN 60034-28: 2013. In the article, the selected impedance frequency characteristics of the tested induction machines derived from measurement have been compared with the corresponding characteristics calculated with the use of the adopted equivalent circuit with electromagnetic parameters determined according to the presented procedure. Furthermore, the characteristics computed on the basis of the classical machine T-type equivalent circuit, whose electromagnetic parameters had been identified in line with the chosen methodologies reported in the standards PN-EN 60034-28: 2013 and IEEE Std 112TM-2004, have been included in the comparative analysis as well. Additional verification of correctness of identified electromagnetic parameters has been realised through comparison of the steady-state power factor-slip and torque-slip characteristics determined experimentally and through the machine operation simulations carried out with the use of the considered equivalent circuits. The studies concerning induction motors with two types of rotor construction – a conventional single cage rotor and a solid rotor manufactured from magnetic material – have been presented in the paper.
The paper presents a new elastic scheduling task model which has been used in the uniprocessor node of a control measuring system. This model allows the selection of a new set of periods for the occurrence of tasks executed in the node of a system in the case when it is necessary to perform additional aperiodic tasks or there is a need to change the time parameters of existing tasks. Selection of periods is performed by heuristic algorithms. This paper presents the results of the experimental use of an elastic scheduling model with a GRASP heuristic algorithm.
This paper presents the results of evolutionary minimisation of the peak-to-peak value of a multi-tone signal. The signal is the sum of multiple tones (channels) with constant amplitudes and frequencies combined with variable phases. An exemplary application is emergency broadcasting using widely used analogue broadcasting techniques: citizens band (CB) or VHF FM commercial broadcasting. The work presented illustrates a relatively simple problem which, however, is characterised by large combinatorial complexity, so a direct (exhaustive) search becomes completely impractical. The minimisation is based on a genetic algorithm (GA), which proves its usability for the given problem. The final result is a significant reduction of the peak-to-peak level of the given multi-tone signal, demonstrated by three real-life examples.
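The core of the problem can be reproduced in a few lines: with all phases zero the tones add coherently, while a simple mutate-and-keep-best search over the phases (a stripped-down stand-in for the GA) already reduces the peak-to-peak value; the tone count, sampling and search budget below are arbitrary:

```python
import math
import random

N_TONES, N_SAMPLES = 8, 1024

def peak_to_peak(phases):
    # One period of a sum of equal-amplitude harmonically related tones.
    s = [sum(math.cos(2 * math.pi * (k + 1) * n / N_SAMPLES + phases[k])
             for k in range(N_TONES))
         for n in range(N_SAMPLES)]
    return max(s) - min(s)

rng = random.Random(42)
worst = peak_to_peak([0.0] * N_TONES)      # coherent phases: peak equals N_TONES

best = worst
phases = [0.0] * N_TONES
for _ in range(300):                       # mutate-and-keep-best phase search
    cand = [p + rng.gauss(0.0, 0.8) for p in phases]
    ptp = peak_to_peak(cand)
    if ptp < best:
        best, phases = ptp, cand
```

The search space is continuous in the phases, but quantising them turns this into exactly the combinatorial problem the paper describes, where exhaustive search is hopeless and a GA shines.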
In areas of acoustic research or applications that deal with not-precisely-known or variable conditions, a method of adaptation to the uncertainty or changes is usually necessary. When searching for an adaptation algorithm, it is hard to overlook the least mean squares (LMS) algorithm. Its simplicity, speed of computation, and robustness have won it a wide range of applications: from telecommunications, through acoustics and vibration, to seismology. The algorithm, however, still lacks a full theoretical analysis. This is probably the cause of its main drawback: the need for a careful choice of the step size, which is why so many variable step size flavors of the LMS algorithm have been developed. This paper contributes to both of the above-mentioned characteristics of the LMS algorithm. First, it derives a new necessary condition for LMS convergence. The condition, although weak, proved useful in developing a new variable step size LMS algorithm which turned out to be quite different from the algorithms known from the literature. Moreover, the algorithm proved effective in both simulations and laboratory experiments, covering two possible applications: adaptive line enhancement and active noise control.
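A minimal LMS sketch in the adaptive line enhancer configuration (one of the two applications mentioned) may help; the signal model, filter length and step size below are illustrative choices, not the paper's settings:

```python
import numpy as np

# Adaptive line enhancer: the filter sees a delayed copy of the input, so it
# learns to predict the narrowband (sinusoidal) line and rejects broadband noise.
rng = np.random.default_rng(0)
n = 4000
t = np.arange(n)
clean = np.sin(2 * np.pi * 0.05 * t)             # narrowband line
x = clean + 0.5 * rng.standard_normal(n)         # observed: line + white noise

L, delay, mu = 32, 1, 0.01                       # taps, decorrelation delay, step size
w = np.zeros(L)
y = np.zeros(n)
for k in range(L + delay, n):
    u = x[k - delay - L:k - delay][::-1]         # delayed tap vector
    y[k] = w @ u                                 # filter output (enhanced line)
    e = x[k] - y[k]                              # prediction error
    w += mu * e * u                              # LMS weight update

# After convergence, y tracks the sinusoid better than the raw input does.
err_raw = float(np.mean((x[2000:] - clean[2000:]) ** 2))
err_ale = float(np.mean((y[2000:] - clean[2000:]) ** 2))
```

The sensitivity to `mu` is easy to observe here: push it toward 2/(L·input power) and the weights diverge, which is precisely the step-size-choice problem the variable-step-size variants target.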
The article describes the problem of selecting heat treatment parameters to obtain the required mechanical properties in heat-treated bronzes. A methodology for the construction of a classification model based on rough set theory is presented. A model of this type allows the construction of inference rules even when our knowledge of the existing phenomena is incomplete, and this is a situation commonly encountered when new materials enter the market. In the case of new test materials, such as the grade of bronze described in this article, we still lack full knowledge, and the choice of heat treatment parameters is based on fragmentary knowledge resulting from experimental studies. The measurement results can be useful in building a model; this model, however, cannot be deterministic, but can only approximate the stochastic nature of the phenomena. The use of rough set theory allows efficient inference also in areas that are not yet fully explored.
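The rough-set notions of lower and upper approximation can be shown on a tiny invented information table (the attributes and classes below have nothing to do with the actual bronze data):

```python
# Toy information table: (temperature, time) -> hardness class.
# The last row is deliberately inconsistent with the one before it,
# which is the kind of incomplete knowledge rough sets handle.
table = [
    (("low", "short"), "soft"),
    (("low", "long"), "soft"),
    (("high", "short"), "hard"),
    (("high", "long"), "hard"),
    (("high", "long"), "soft"),   # inconsistent example
]

def approximations(target):
    # Group rows by indiscernibility: identical attribute values.
    blocks = {}
    for attrs, cls in table:
        blocks.setdefault(attrs, set()).add(cls)
    # Lower approximation: blocks that certainly belong to the target class.
    lower = {a for a, classes in blocks.items() if classes == {target}}
    # Upper approximation: blocks that possibly belong to the target class.
    upper = {a for a, classes in blocks.items() if target in classes}
    return lower, upper

lower, upper = approximations("soft")
```

Rules derived from the lower approximation are certain, while the boundary region (upper minus lower) yields only possible rules, which is how the methodology tolerates the stochastic, partially explored behaviour of a new material.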