Search results



Abstract

Gigantic amounts of data and information on the molecules that constitute the very complex cell machinery have been collected, classified and stored in data banks. Although we possess an enormous amount of knowledge about the properties and functions of thousands of molecular entities, we are still far from understanding how they work in a living cell. It is now clear that these molecules (genes, proteins) are not autonomous, that there is no direct linear relation between genotype and phenotype, that the majority of functions are carried out by concerted molecular activity, and that the majority of diseases are multifactorial. A basic property of matter in a living cell (both normal and pathological) is the interaction between a variety of macromolecules, mainly proteins and genes (DNA). In a process of self-organization they are able to form an active molecular biological system – a complex, labile and dynamic network whose integrity is secured by non-covalent bonds. In this essay some basic properties of network structure and the universal rules that govern it are described. Network or systems biology is a promising new research approach in biology and medicine.
Go to article

Abstract

The following paper provides insight into the application of contemporary heuristic methods to the graph coloring problem. A variety of algorithmic solutions to the Graph Coloring Problem (GCP) are discussed and recommendations for their implementation are provided. The GCP is an NP-hard problem that aims at finding the minimum number of colors for the vertices of a graph in such a way that no two adjacent vertices are marked with the same color. With the advent of modern processing units, metaheuristic approaches to solving the GCP have been extended to discrete optimization. To explain the effectiveness of these methods, a thorough survey of AI-based algorithms for the GCP is provided, with the main differences between specific techniques pointed out.
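As a point of reference for the heuristics discussed above, the sketch below shows the simplest sequential greedy colouring in Python. It is not taken from the paper; the adjacency-dictionary representation and the vertex order are illustrative assumptions, and this baseline is exactly what the surveyed metaheuristics aim to improve on.

```python
def greedy_coloring(adj):
    """Colour vertices in the given order with the smallest colour
    not already used by an already-coloured neighbour."""
    colour = {}
    for v in adj:
        taken = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in taken:
            c += 1
        colour[v] = c
    return colour

# Example: a 4-cycle, which this ordering colours with two colours.
adjacency = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(greedy_coloring(adjacency))   # {0: 0, 1: 1, 2: 0, 3: 1}
```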
Go to article

Abstract

The process of railway track adjustment is a task which consists in bringing, in geometrical terms, the actual track axis to the position ensuring safe and efficient traffic of rail vehicles. The initial calculation stage of this process is to determine approximately the limits of sections of different geometry, i.e. straight lines, arcs and transition curves. This allows a draft alignment design to be drawn up, whose position is then checked against the current state of the track. In practice, this type of design rarely meets the requirements associated with the values of corrective alignments. Therefore, it becomes necessary to apply iterated correction of the solution in order to determine the final design, allowing minor corrections to be introduced while maintaining the assumed parameters of the route. The degree of complexity of this process is determined by the quality of the preliminary draft alignment design. Delimitation of the sections for creating such a design is usually done using the curvature diagram (InRail v8.7 Reference Guide [1], Jamka et al. [2], Strach [3]), which is, however, sensitive to track misalignment and measurement errors. In their paper, Lenda and Strach [4] proposed a new method for creating the curvature diagram, based on an approximating spline function, theoretically allowing, inter alia, a reduced vulnerability to interfering factors. In this study, that method was used to determine a preliminary draft alignment design for a severely overexploited track, and thus under conditions adversely affecting the accuracy of the readings. The results were compared with those obtained using the classical curvature diagram. The obtained results indicate that the method increases the readability of the curvature graph, which for a considerably deregulated track takes an irregular shape that is difficult to interpret. The method also favourably affects the accuracy of determining the initial parameters of the design, reducing the entire calculation process.
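As an illustration of the spline-based idea described above, the following Python sketch smooths a synthetic, noisy curvature profile with an approximating spline. The profile, the noise level and the smoothing factor are assumptions made for demonstration only and do not reproduce the computations of Lenda and Strach [4].

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical noisy curvature profile of a straight-arc-straight layout
x = np.linspace(0.0, 500.0, 501)                              # chainage [m]
true_curv = np.where((x > 150) & (x < 350), 1 / 400.0, 0.0)   # arc of radius 400 m
measured = true_curv + np.random.normal(0.0, 4e-4, x.size)    # measurement noise

# Approximating (smoothing) spline; the smoothing factor balances fidelity
# against noise suppression and has to be tuned to the survey accuracy.
spline = UnivariateSpline(x, measured, k=3, s=x.size * (4e-4) ** 2)
smoothed = spline(x)

# Section limits (straight / transition curve / arc) can then be read off
# where the smoothed curvature departs from, or settles near, constant values.
```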
Go to article

Abstract

The K4 graph and the inertia of the adjacency matrix for a connected planar graph. A substantial history exists of incorporating matrix analysis and graph theory into geography and the geospatial sciences. This study contributes to that literature, aiding analyses of spatial relationships, especially in terms of spatial weights matrices. We focus on the n-by-n binary 0–1 adjacency matrix whose rows and columns represent the nodes of a connected planar graph. The inertia of this matrix is the triple of numbers of positive (n+), negative (n−), and zero (n0) eigenvalues. Approximating the Jacobian term of spatial auto-normal models can benefit from calculating these matrix quantities. We establish restrictions on n− by exploiting properties we uncover for the K4 graph.
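A direct numerical check of the inertia is easy to sketch. The snippet below is not part of the study; it simply computes (n+, n−, n0) for the K4 adjacency matrix with a dense symmetric eigendecomposition, with the zero-tolerance chosen arbitrarily.

```python
import numpy as np

def inertia(adjacency):
    """Counts of positive, negative and zero eigenvalues (n+, n-, n0)
    of a symmetric 0-1 adjacency matrix."""
    eig = np.linalg.eigvalsh(adjacency)            # symmetric -> real spectrum
    tol = 1e-9 * max(1.0, np.abs(eig).max())
    n_pos = int((eig > tol).sum())
    n_neg = int((eig < -tol).sum())
    n_zero = eig.size - n_pos - n_neg
    return n_pos, n_neg, n_zero

# K4: complete graph on 4 vertices, eigenvalues 3, -1, -1, -1 -> inertia (1, 3, 0)
K4 = np.ones((4, 4)) - np.eye(4)
print(inertia(K4))
```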
Go to article

Abstract

A confidential algorithm for the approximate graph vertex cover problem is presented in this article. It preserves the privacy of the data at every stage of the computation, which is very important in the context of cloud computing. The security of our solution is based on a fully homomorphic encryption scheme. The time complexity and the security aspects of the considered algorithm are described.
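For orientation, the sketch below shows the classic matching-based 2-approximation for vertex cover on plaintext data. The article's contribution is running a comparable computation on data protected by fully homomorphic encryption, which is not reproduced here.

```python
def approx_vertex_cover(edges):
    """Classic 2-approximation: repeatedly take an uncovered edge and add
    both of its endpoints to the cover (maximal-matching based).
    The result is at most twice the size of an optimal cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A square with one diagonal; the optimum cover is {0, 2}.
print(approx_vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```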
Go to article

Abstract

Speech and music signals are multifractal phenomena. The time displacement profiles of speech and music signals show strikingly different scaling behaviour. However, a full complexity analysis of their frequency and amplitude has not been made so far. We propose a novel complex-network-based approach (the Visibility Graph) to study the scaling behaviour of the frequency-wise amplitude variation of speech and music signals over time, and then extract their PSVG (Power of Scale-freeness of the Visibility Graph). From this analysis it emerges that the scaling behaviour of the amplitude profile of music varies considerably from frequency to frequency, whereas it is almost consistent for the speech signal. The left auditory cortical areas are proposed to be neurocognitively specialised in speech perception and the right ones in music perception. Hence we conclude that the human brain might have adapted to the distinctly different scaling behaviour of speech and music signals and developed different decoding mechanisms, as if following the so-called Fractal Darwinism. Using this method, we can capture all non-stationary aspects of the acoustic properties of the source signal to the deepest level, which has considerable neurocognitive significance. Further, we propose a novel non-invasive application for detecting neurological illness (here, autism spectrum disorder, ASD), using the quantitative parameters deduced from the variation of the scaling behaviour of speech and music.
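A minimal sketch of the approach is given below, under the assumption that the natural visibility criterion of Lacasa et al. is the one intended: it builds the visibility graph of a short amplitude series and fits a power-law exponent to its degree distribution as a rough proxy for the PSVG, not as the paper's exact estimator.

```python
import numpy as np

def visibility_graph(y):
    """Natural visibility graph of a 1-D series: samples i < j are joined
    if no intermediate sample rises above the straight line between them."""
    n = len(y)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            visible = all(
                y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.append((i, j))
    return edges

def psvg_estimate(edges, n):
    """Rough power-law exponent of the degree distribution from a log-log fit."""
    deg = np.zeros(n)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    ks, counts = np.unique(deg[deg > 0], return_counts=True)
    p = counts / counts.sum()
    slope, _ = np.polyfit(np.log(ks), np.log(p), 1)
    return -slope

series = np.random.rand(200)            # stand-in for a frequency-band amplitude profile
edges = visibility_graph(series)
print(psvg_estimate(edges, len(series)))
```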
Go to article

Abstract

Fault detection and location are important front-end tasks in assuring the reliability of power electronic circuits. In essence, both tasks can be considered classification problems. This paper presents a fast fault classification method for power electronic circuits that uses the support vector machine (SVM) as a classifier and the wavelet transform as a feature extraction technique. One-against-rest SVM and one-against-one SVM are two general approaches to fault classification in power electronic circuits. However, these methods have a high computational complexity; therefore, in this design we employ a directed acyclic graph (DAG) SVM to implement the fault classification. The DAG SVM is close to the one-against-one SVM in classification performance, but it is much faster. Moreover, in the presented approach, the DAG SVM is improved by introducing the K-nearest-neighbours method to reduce some computations, so that the classification time can be further reduced. A rectifier and an inverter are used to demonstrate the effectiveness of the presented design.
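The DAG evaluation strategy can be sketched independently of the wavelet feature extraction. In the Python sketch below, scikit-learn SVMs stand in for the trained pairwise classifiers; the kernel choice and data handling are assumptions for illustration, not the paper's implementation (and the K-nearest-neighbours speed-up is omitted).

```python
from itertools import combinations
import numpy as np
from sklearn.svm import SVC

def train_pairwise(X, y):
    """One binary SVM per (class_a, class_b) pair, as in one-against-one."""
    models = {}
    for a, b in combinations(sorted(set(y)), 2):
        mask = np.isin(y, [a, b])
        models[(a, b)] = SVC(kernel="rbf").fit(X[mask], y[mask])
    return models

def dag_predict(models, x, classes):
    """DAG pass: keep a candidate list and eliminate one class per test, so
    only k-1 of the k(k-1)/2 pairwise classifiers are evaluated."""
    cand = sorted(classes)
    while len(cand) > 1:
        a, b = cand[0], cand[-1]
        key = (a, b) if (a, b) in models else (b, a)
        winner = models[key].predict(x.reshape(1, -1))[0]
        cand.remove(b if winner == a else a)
    return cand[0]
```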
Go to article

Abstract

This paper presents signal processing aspects of the automatic segmentation of the retinal layers of the human eye. It draws attention to the problems that occur during computer processing of images obtained with Spectral Domain Optical Coherence Tomography (SD-OCT). The accuracy of retinal layer segmentation is shown for a set of typical 3D scans of rather low quality, and some possible ways to improve the quality of the final results are pointed out. The experimental studies were performed using so-called B-scans obtained with the OCT Copernicus HR device.
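As a toy illustration of the kind of per-A-scan processing involved (not the paper's algorithm), the sketch below estimates one strong layer boundary per column of a B-scan from the smoothed axial intensity gradient; the smoothing width is an arbitrary assumption and low-quality scans would need considerably more care.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def top_boundary(b_scan):
    """For each A-scan (column), return the depth index of the strongest
    positive axial intensity gradient after axial smoothing."""
    smoothed = gaussian_filter1d(b_scan.astype(float), sigma=3, axis=0)
    gradient = np.diff(smoothed, axis=0)
    return gradient.argmax(axis=0)          # one depth index per column
```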
Go to article

Abstract

The goal of this paper is to explore and to provide tools for the investigation of problems of unit-length scheduling of incompatible jobs on uniform machines. We present two new algorithms that are a significant improvement over the known algorithms. The first one, Algorithm 2, is 2-approximate for the problem Qm | pj = 1, G = bisubquartic | Cmax. The second one, Algorithm 3, is 4-approximate for the problem Qm | pj = 1, G = bisubquartic | ΣCj, where m ∈ {2, 3, 4}. The theory behind the proposed algorithms is based on the properties of 2-coloring with maximal coloring width, and on the properties of an ideal machine, an abstract machine that we introduce in this paper.
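For context, the sketch below shows the baseline greedy rule for unit-length jobs on uniform machines when the incompatibility graph G is ignored: each job goes to the machine on which it would finish earliest, which is optimal in that unconstrained case. It is not Algorithm 2 or 3 from the paper, which must additionally respect G.

```python
import heapq

def schedule_unit_jobs(n_jobs, speeds):
    """Assign each unit-length job to the uniform machine on which it would
    finish earliest (no incompatibility constraints considered)."""
    # heap entries: (finish time of the machine's next job, machine index)
    heap = [(1.0 / s, i) for i, s in enumerate(speeds)]
    heapq.heapify(heap)
    assignment = [[] for _ in speeds]
    for job in range(n_jobs):
        finish, i = heapq.heappop(heap)
        assignment[i].append(job)
        heapq.heappush(heap, (finish + 1.0 / speeds[i], i))
    return assignment

# Three machines with speeds 2, 1, 1 and seven unit jobs (Cmax = 2.0).
print(schedule_unit_jobs(7, [2.0, 1.0, 1.0]))
```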
Go to article
