In this paper, a new Multi-Layer Perceptron (MLP) neural network classifier is proposed for distinguishing sonar targets from non-targets in acoustic backscattered signals. Despite the capabilities of MLP NNs, they are commonly trained with Back-Propagation (BP) and Gradient Descent (GD); consequently, they suffer not only from poor classification accuracy but also from getting stuck in local minima and from low convergence speed. To remedy these shortcomings, this study uses the Adaptive Best-Mass Gravitational Search Algorithm (ABGSA) to train the MLP NN. This algorithm mitigates a known disadvantage of the GSA by exploiting the best masses collected over iterations and accelerating the exploitation phase. To test the proposed classifier, the algorithm is assessed against the GSA, GD, GA, PSO, and the hybrid PSOGSA method on three datasets of various dimensions. The evaluation metrics are convergence speed, probability of failure in a local minimum, and classification accuracy. Finally, as a practical application, the proposed network classifies a sonar dataset consisting of backscattered echoes from six different objects: four targets and two non-targets. Results indicate that the new classifier outperforms all of the benchmarks with respect to the aforementioned criteria.
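To make the training scheme concrete, the following is a minimal sketch of the standard GSA (not the paper's ABGSA variant, whose best-mass bookkeeping is specific to that work) used to optimize the weights of a one-hidden-layer MLP on a toy XOR task. All names, network sizes, and hyperparameters here are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def mlp_forward(w, X, n_in, n_hid):
    """Unpack a flat weight vector into a one-hidden-layer MLP and run it."""
    k = n_in * n_hid
    W1 = w[:k].reshape(n_in, n_hid)
    b1 = w[k:k + n_hid]
    W2 = w[k + n_hid:k + 2 * n_hid]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output unit

def gsa_train(X, y, n_hid=4, n_agents=20, iters=60, G0=10.0, alpha=20.0, seed=0):
    """Train MLP weights with a basic Gravitational Search Algorithm (sketch)."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    dim = n_in * n_hid + 2 * n_hid + 1
    pos = rng.uniform(-1.0, 1.0, (n_agents, dim))  # agents = candidate weight vectors
    vel = np.zeros_like(pos)
    best_w, best_f, history = None, np.inf, []
    for t in range(iters):
        fit = np.array([np.mean((mlp_forward(p, X, n_in, n_hid) - y) ** 2)
                        for p in pos])
        if fit.min() < best_f:                     # keep the best solution seen so far
            best_f, best_w = fit.min(), pos[fit.argmin()].copy()
        history.append(best_f)
        # Masses: lower error -> heavier agent (minimisation problem).
        m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = G0 * np.exp(-alpha * t / iters)        # gravitational constant decays
        # Kbest shrinks over time, shifting from exploration to exploitation.
        kbest = np.argsort(fit)[:max(1, int(n_agents * (1 - t / iters)))]
        acc = np.zeros_like(pos)
        for i in range(n_agents):
            for j in kbest:
                if j == i:
                    continue
                diff = pos[j] - pos[i]
                acc[i] += rng.random() * G * M[j] * diff / (np.linalg.norm(diff) + 1e-12)
        vel = rng.random((n_agents, 1)) * vel + acc
        pos = pos + vel
    return best_w, best_f, history

# Toy XOR classification problem (illustrative only).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
w, f, hist = gsa_train(X, y)
```

Because the best-so-far fitness is recorded each iteration, `hist` is non-increasing, which is the property the abstract's convergence-speed comparison rests on.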
The material presents a real problem inherent in the management of computer systems, namely that of finding the appropriate system settings so that the expected performance can be achieved. It also presents a prototype that aims to adapt the system to meet an objective defined as application efficiency. The prototype uses the resource-oriented mechanism built into the OS Workload Manager and adds a proposed goal-oriented subsystem based on fuzzy logic, which manages resources so as to make the best use of them and translates the goal into the use of system resources, taking into account nondeterministic technology-related factors such as the duration of resource allocation and release, sharing of resources in uncapped mode, and performance-measurement errors.
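To illustrate the kind of fuzzy-logic mapping a goal-oriented subsystem might use, here is a minimal Sugeno-style sketch that turns a measured performance-to-goal ratio into a resource-share adjustment. The membership functions, rule base, and output values are invented for illustration; they are not the prototype's actual rules.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def adjust_share(perf_ratio):
    """Fuzzy rule base (illustrative): perf_ratio = achieved / goal performance.
    IF performance is LOW  THEN increase share (+0.2)
    IF performance is OK   THEN keep share     ( 0.0)
    IF performance is HIGH THEN decrease share (-0.1)
    Defuzzified as a firing-strength-weighted average (zeroth-order Sugeno).
    """
    low = tri(perf_ratio, 0.0, 0.5, 1.0)
    ok = tri(perf_ratio, 0.5, 1.0, 1.5)
    high = tri(perf_ratio, 1.0, 1.5, 2.0)
    num = low * 0.2 + ok * 0.0 + high * (-0.1)
    den = low + ok + high
    return num / den if den > 0 else 0.0
```

The overlapping memberships give a smooth response: an application halfway between "low" and "ok" performance receives a proportionally smaller share increase, which is how fuzzy control absorbs the measurement errors the abstract mentions.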
In areas of acoustic research or applications that deal with imprecisely known or variable conditions, a method of adapting to the uncertainty or changes is usually necessary. When searching for an adaptation algorithm, it is hard to overlook the least mean squares (LMS) algorithm. Its simplicity, speed of computation, and robustness have won it a wide range of applications: from telecommunications, through acoustics and vibration, to seismology. The algorithm, however, still lacks a full theoretical analysis. This is probably the cause of its main drawback: the need for a careful choice of the step size, which is the reason why so many variable-step-size flavors of the LMS algorithm have been developed. This paper contributes to both of the above-mentioned aspects of the LMS algorithm. First, it derives a new necessary condition for LMS convergence. The condition, although weak, proved useful in developing a new variable-step-size LMS algorithm that differs substantially from the algorithms known from the literature. Moreover, the algorithm proved effective in both simulations and laboratory experiments, covering two possible applications: adaptive line enhancement and active noise control.
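For reference, the following is the textbook fixed-step LMS update whose step-size sensitivity motivates the variable-step-size work described above; it is not the paper's new algorithm. The plant coefficients, step size, and signal lengths are illustrative choices for a noise-free system-identification example.

```python
import numpy as np

def lms_identify(x, d, n_taps, mu):
    """Textbook LMS: adapt an FIR filter w so its output tracks the desired signal d."""
    w = np.zeros(n_taps)
    err = np.zeros(len(x))
    buf = np.zeros(n_taps)                 # most recent input samples, buf[0] = newest
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        y = w @ buf                        # filter output
        err[n] = d[n] - y                  # estimation error
        w = w + mu * err[n] * buf          # stochastic-gradient weight update
    return w, err

rng = np.random.default_rng(1)
h = np.array([0.6, -0.3, 0.1])             # unknown plant to identify (illustrative)
x = rng.standard_normal(5000)              # white excitation
d = np.convolve(x, h)[:len(x)]             # noise-free plant output
w, err = lms_identify(x, d, n_taps=3, mu=0.05)
```

With white unit-variance input, stability requires roughly `mu < 2 / (n_taps * input_power)`; the chosen `mu = 0.05` sits well inside that bound, so the weights converge to the plant coefficients, while a `mu` near or above the bound would diverge. That trade-off between convergence speed and stability is exactly what variable-step-size schemes try to automate.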