The paper presents a key-finding algorithm based on the concept of a music signature. The proposed music signature is a set of 2-D vectors that can be treated as a compressed representation of musical content in 2-D space. Each vector represents a different pitch class: its direction is determined by the position of the corresponding major key on the circle of fifths, and its length reflects the multiplicity (i.e. the number of occurrences) of that pitch class in a musical piece or its fragment. The paper presents the theoretical background, examples explaining the essence of the idea, and the results of tests confirming the effectiveness of the proposed algorithm for finding the key from the music signature. The developed method was compared with key-finding algorithms using the Krumhansl-Kessler, Temperley and Albrecht-Shanahan profiles. The experiments were performed on a set of Bach preludes, Bach fugues and Chopin preludes.
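The signature construction described above can be sketched in a few lines. The details below are illustrative assumptions, not the paper's exact algorithm: pitch class p is mapped to circle-of-fifths step (7·p) mod 12 (30 degrees per step), and the toy key estimate exploits the fact that a diatonic pitch set occupies seven consecutive steps of the circle, so its summed vector points two fifths above the tonic.

```python
import math
from collections import Counter

def music_signature(pitch_classes):
    """One 2-D vector per pitch class: direction from the position of the
    corresponding major key on the circle of fifths, length from the
    number of occurrences (multiplicity) of the pitch class."""
    counts = Counter(p % 12 for p in pitch_classes)
    sig = {}
    for p, n in counts.items():
        step = (7 * p) % 12              # C=0, G=1, D=2, ... on the circle
        theta = 2 * math.pi * step / 12  # 30 degrees per fifth
        sig[p] = (n * math.cos(theta), n * math.sin(theta))
    return sig

def estimate_major_key(pitch_classes):
    """Toy estimate (an assumption, not the published method): snap the
    direction of the summed signature to the nearest circle-of-fifths
    step, then shift back by two fifths, since the summed vector of a
    diatonic set points at the centre of its seven-step arc."""
    sig = music_signature(pitch_classes).values()
    vx = sum(v[0] for v in sig)
    vy = sum(v[1] for v in sig)
    step = round(math.atan2(vy, vx) / (2 * math.pi / 12))
    return (7 * ((step - 2) % 12)) % 12  # tonic pitch class
```

Feeding the C major scale (pitch classes 0, 2, 4, 5, 7, 9, 11) returns 0 (C), and the G major scale returns 7 (G).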
We talk to Assoc. Prof. Paweł Gancarczyk from the PAS Institute of Art about how early music was perceived at the time when it was being composed, what modern musicologists regard as new discoveries and how our identities are shaped by sound.
This article presents some key events in the founding of the Polish Section of the Audio Engineering Society. In addition, it outlines the history of the International Symposia on Sound Engineering and Mastering and briefly reviews the papers contained in this issue.
An unpublished Musical by Pirandello: a polysemic and multicultural kaleidoscope – It was well known that Pirandello conceived the idea of writing a Musical, but the recent discovery of the actual text and the musical score in the archive of Guido Torre Gherson, the writer's agent during his years in Paris, has shed some light on his final years and writings. The findings are discussed in the context of his late theatrical and fictional works, such as I giganti della montagna.
We explore the relationship between accents and expression in piano performance. Accents are local events that attract a listener's attention and are either evident from the score (immanent) or added by the performer (performed). Immanent accents are associated with grouping (phrasing), metre, melody and harmony. In piano music, performed accents involve changes in timing, dynamics, articulation and pedalling; they vary in amplitude, form and duration. We analyzed the first eight bars of Chopin's Prelude op. 28 no. 6. In a separate study, music theorists had marked grouping, melodic and harmonic accents on the score and estimated the importance (salience) of each. Here, we mathematically modeled timing and dynamics in the prelude in two ways using Director Musices (DM), a software package for the automatic rendering of expressive performance. The first rendering focused on phrasing, following existing, tested procedures in DM. The second focused on accents: timing and dynamics in the vicinity of the accents identified by the theorists. In an informal listening test, 10 out of 12 participants (5 of 6 musicians and 5 of 6 non-musicians) preferred the accent-based rendering, and several stated that it had more variation in timing and dynamics from one phrase to the next.
In slowly flaring horns the wave fronts can be considered approximately plane, and the input impedance can be calculated with the transmission-line method (short cones in series). In a rapidly flaring horn the kinetic energy of transverse flow adds to the local inertance, resulting in an effective increase in length when it lies at a pressure node. For low frequencies corrections are available; these fail at higher frequencies, when cross-dimensions become comparable to the wavelength and resonances arise in the transverse direction. To investigate this, a pipe radiating into open space is modelled with a finite-difference method. The outer boundaries must be fully absorbing, like the walls of an anechoic chamber; to achieve this, Berenger's perfectly matched layer technique is applied. Results are presented for conical horns and compared with earlier published investigations on flanges. The input impedance changes when the largest cross-dimension (the outer diameter of the flange or the diameter of the horn end) becomes comparable to half a wavelength. This effect shifts the positions of the higher modes of the pipe, influencing the conditions for mode locking, which is important for ease of playing, dynamic range and sound quality.
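The transmission-line method mentioned above (short segments in series) can be illustrated with a minimal sketch. As a simplification it uses short cylindrical rather than conical segments, assumes plane lossless waves, and the radii, segment length and mouth load are made-up values, not the paper's model:

```python
import math

RHO, C = 1.2, 343.0  # air density (kg/m^3) and speed of sound (m/s)

def input_impedance(radii, seg_len, freq, z_load=0.0):
    """Transform the load impedance at the mouth backwards through a chain
    of short cylindrical segments using the lossless transmission-line
    formula  Zin = Z0 (ZL + j Z0 tan kL) / (Z0 + j ZL tan kL).

    radii: segment radii in metres, listed from mouth to mouthpiece.
    z_load: radiation load at the mouth (0 approximates an ideal open end).
    """
    k = 2.0 * math.pi * freq / C
    z = complex(z_load)
    for r in radii:
        z0 = RHO * C / (math.pi * r * r)  # characteristic impedance of segment
        t = math.tan(k * seg_len)
        z = z0 * (z + 1j * z0 * t) / (z0 + 1j * z * t)
    return z
```

For a uniform bore (all radii equal) the chained segments reproduce the exact single-line result Zin = j Z0 tan(kL), which provides a simple sanity check of the cascade.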
The main goal of this study is to create a method of loudness scaling based on categorical perception. Its main features, such as the testing procedure, a calibration procedure securing reliable results, and the use of natural test stimuli, are described in the paper and assessed against a procedure that uses 1/2-octave bands of noise (LGOB) for loudness-growth estimation. The Mann-Whitney U-test is employed to check whether the proposed method is statistically equivalent to LGOB, and it is shown that the loudness functions obtained with both methods are similar in the statistical sense. Moreover, the band-filtered musical-instrument signals are experienced as more pleasant than the narrow-band noise stimuli, and the proposed test takes less time to perform. The proposed method may be incorporated into hearing-aid fitting strategies or used for checking individual loudness-growth functions and adapting them to comfort-level settings while listening to music.
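The statistical-equivalence check can be illustrated with SciPy's implementation of the Mann-Whitney U-test. The category responses below are made-up numbers standing in for listener data, not results from the study:

```python
from scipy.stats import mannwhitneyu

# Hypothetical loudness-category responses for the same stimuli under the
# proposed categorical method and under the LGOB noise-band procedure.
proposed = [2, 3, 3, 4, 5, 5, 6, 7]
lgob = [2, 3, 4, 4, 5, 6, 6, 7]

# Two-sided test of the null hypothesis that both samples come from the
# same distribution; a large p-value means no significant difference,
# i.e. the two methods behave equivalently on these (toy) data.
u_stat, p_value = mannwhitneyu(proposed, lgob, alternative="two-sided")
```

For these near-identical samples the p-value is far above the usual 0.05 threshold, so the null hypothesis of equivalence would not be rejected.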
The airflow in the mouths of an open and a closed flue organ pipe of corresponding geometrical proportions is studied. Phase-locked particle image velocimetry (PIV) with subsequent analysis by biorthogonal decomposition is employed in order to compare the flow mechanisms and related features. The most significant differences lie in the mean velocity distribution and in the rapidity of the lateral motion of the jet. Remarks are made on the estimation of pressure from PIV data and its importance for the aeroacoustic source terms, and a specific example is discussed.
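A biorthogonal decomposition of a set of phase-locked PIV snapshots can be computed via a singular value decomposition of the snapshot matrix, yielding spatial modes ("topos") and temporal modes ("chronos"). The sketch below, using a synthetic rank-two field, is a generic illustration of the technique and not the authors' processing chain:

```python
import numpy as np

def bod(snapshots):
    """Biorthogonal decomposition of a (n_points, n_times) snapshot matrix.
    Returns spatial modes (topos, columns), modal amplitudes, and temporal
    modes (chronos, columns), so that snapshots ~ topos @ diag(amps) @ chronos.T."""
    topos, amps, chronos_t = np.linalg.svd(snapshots, full_matrices=False)
    return topos, amps, chronos_t.T

# Synthetic "velocity field": two space-time separable structures, so the
# snapshot matrix has rank two and two modes reconstruct it exactly.
x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 1.0, 32)
u = 3.0 * np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t)) \
    + 1.0 * np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * t))

topos, amps, chronos = bod(u)
reconstruction = topos[:, :2] * amps[:2] @ chronos[:, :2].T
```

The modal amplitudes rank the energy of each coherent structure, which is what makes the decomposition useful for comparing jet dynamics between pipe geometries.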
The aim of this paper is to describe the process of choosing the best surround microphone technique for recording a choir with an instrumental ensemble. First, examples of multichannel microphone techniques, including those used in the recording, are described. Then, the assumptions and details of the music recording in the Radio Gdansk studio are provided, as well as the process of mixing the multichannel recording. Extensive subjective tests were performed with a group of sound engineers and students in order to find the preferred recording techniques. Because the final recording is based on a mix of the "direct/ambient" and "direct-sound all-around" approaches, a subjective quality evaluation was conducted and on this basis the best-rated multichannel techniques were chosen. The results show that listeners may weigh different factors when choosing the best-rated multichannel techniques in separate tasks, as different systems were chosen in the two tests.
The impact of musical experience on sound perception in selected auditory tasks, such as pitch discrimination, pitch-timbre categorization and pitch memorization, is discussed for blind and visually impaired children and teenagers. Subjects were divided into three groups: those with no musical experience, those with little musical experience, and those with substantial musical experience. Blind and visually impaired subjects were investigated, while sighted persons formed the reference groups. To date, no study has described the impact of musical experience on the results of such experiments for blind and visually impaired children and teenagers. Our results suggest that blind persons with musical experience may be more sensitive to frequency differences and to differences in timbre between two signals, and may have better short-term auditory memory, than blind persons with no musical experience. Musical experience of visually impaired persons does not necessarily lead to better performance in all of the conducted auditory tasks.
This study attempts to model a complete vibrating guitar, including its non-linear features, specifically the tension-compression of the truss rod and the tension of the strings. The purpose of such a model is to examine the influence of design parameters on tone. Most experimental studies are flawed by uncertainties introduced by the materials and assembly of an instrument. Since numerical modelling of instruments allows deterministic control over design parameters, a detailed numerical model of a folk guitar was analysed, and simulations were performed to reproduce the excitation and measurement of guitar vibration. The virtual guitar was set up like a real guitar in a series of geometrically non-linear analyses. Balancing the string and truss-rod tensions resulted in a realistic initial state of deformation, which affected the subsequent spectral analyses carried out after the dynamic simulations. The design parameters of the guitar could be manipulated freely without introducing the unwanted uncertainties typical of experimental studies. The study also highlights the importance of the acoustic medium in numerical models.
This article presents a study on music genre classification based on the separation of music into harmonic and drum components. For this purpose, audio signal separation is performed to extend the overall vector of parameters with new descriptors extracted from the harmonic and/or drum content. The study uses the ISMIS database of music files, represented by vectors of parameters containing music features. A Support Vector Machine (SVM) classifier and a co-training method adapted to the standard SVM are employed for genre classification. Additional experiments performed with reduced feature vectors improved the overall result. Finally, the results and conclusions drawn from the study are presented, and suggestions for further work are outlined.
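The classification step can be sketched with scikit-learn. The feature values below are synthetic stand-ins for the ISMIS parameter vectors and for the extra harmonic/drum descriptors, and this plain train/test split omits the co-training part of the study:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical feature vectors: 8 base descriptors per track, extended by
# 4 descriptors extracted from the separated harmonic/drum content, which
# here carry the genre-discriminating information.
n = 40  # tracks per genre
base = rng.normal(size=(2 * n, 8))
separated = np.vstack([rng.normal(0.0, 1.0, (n, 4)),
                       rng.normal(3.0, 1.0, (n, 4))])
X = np.hstack([base, separated])  # extended vector of parameters
y = np.array([0] * n + [1] * n)   # two genre labels

clf = SVC(kernel="rbf").fit(X[::2], y[::2])  # train on every other track
acc = clf.score(X[1::2], y[1::2])            # evaluate on the rest
```

Because only the appended descriptors separate the two synthetic genres, a high accuracy here illustrates how extending the feature vector with separation-derived descriptors can help the classifier.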
In Western music culture, instruments have been developed according to their unique acoustical features, based on types of excitation, resonance and radiation. These include the woodwind, brass, bowed and plucked string, and percussion families of instruments. Instrument performance, in turn, depends on musical training, and music listening depends on perception of the instrument's output. Since musical signals are easier to understand in the frequency domain than in the time domain, much effort has gone into spectral analysis and the extraction of salient parameters, such as spectral centroids, in order to create simplified synthesis models for musical instruments. Moreover, perceptual tests have been made to determine the relative importance of various parameters, such as spectral centroid variation, spectral incoherence and spectral irregularity. It turns out that the importance of a particular parameter depends both on its strength within musical sounds and on the robustness of its effect on perception. The methods that the author and his colleagues have used to explore timbre perception are: 1) discrimination of parameter reduction or elimination; 2) dissimilarity judgments combined with multidimensional scaling; 3) informal listening to sound-morphing examples. This paper discusses the ramifications of this work for sound synthesis and timbre transposition.
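As an illustration of one such salient parameter, the spectral centroid is the amplitude-weighted mean frequency of the magnitude spectrum and can be computed directly from an FFT. The signal below is a synthetic sine tone, not one of the instrument sounds studied:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency of the magnitude spectrum, in Hz."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * mags) / np.sum(mags))

# A 1-second, 1 kHz sine sampled at 8 kHz puts its energy in a single FFT
# bin, so the centroid lands at (approximately) 1000 Hz.
fs = 8000
n = np.arange(fs)
tone = np.sin(2 * np.pi * 1000.0 * n / fs)
centroid = spectral_centroid(tone, fs)
```

Tracking this quantity frame by frame gives the spectral centroid variation mentioned above, one of the parameters whose perceptual importance was tested.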