The NTAV/SPA 2012 conference was held on 27–29 September 2012 and was organized by the Institute of Electronics, Lodz University of Technology (www.eletel.p.lodz.pl), with the support of the IEEE Polish Section (Region 8), the Polish Section of the Audio Engineering Society, the Department of Acoustics at Wroclaw University of Technology, and the Division of Signal Processing and Electronic Systems at Poznan University of Technology.
Sonification is defined as the presentation of information by means of non-speech audio. In assistive technologies for the blind, sonification is most often used in electronic travel aids (ETAs) - devices that support independent mobility through obstacle detection or assist in orientation and navigation. This review presents an original classification of the sonification schemes implemented in the most widely known ETAs, covering both commercially available devices and those at various stages of research, organized according to the input data used, the level of the signal processing algorithms applied, and the sonification methods employed. Additionally, the sonification approach developed in the Naviton project is presented. The prototype utilizes stereovision scene reconstruction, obstacle and surface segmentation, and spatial HRTF-filtered audio with discrete musical sounds. It was successfully tested in a pilot study with blind volunteers in a controlled environment, allowing them to localize and navigate around obstacles.
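To make the sonification idea concrete, the sketch below maps an obstacle's direction and distance to a short stereo tone. This is a deliberately minimal illustration, not the Naviton HRTF pipeline: azimuth is rendered by simple constant-power amplitude panning rather than HRTF filtering, and distance is mapped to pitch and loudness. All parameter values (frequency range, distance clamp) are illustrative assumptions.

```python
import math

def sonify_obstacle(azimuth_deg, distance_m, sample_rate=8000, duration=0.2):
    """Render an obstacle's direction and distance as a short stereo tone.

    A minimal sketch (not the Naviton HRTF pipeline): azimuth drives
    left/right amplitude panning; distance drives pitch and loudness,
    so nearer obstacles sound higher-pitched and louder.
    """
    # Pitch falls from ~880 Hz (near, 0.5 m) to ~220 Hz (far, clamped at 5 m).
    d = min(max(distance_m, 0.5), 5.0)
    freq = 880.0 - (d - 0.5) / 4.5 * 660.0
    # Constant-power pan: -90 deg = full left, +90 deg = full right.
    pan = math.radians((azimuth_deg + 90.0) / 2.0)
    left_gain, right_gain = math.cos(pan), math.sin(pan)
    loudness = 1.0 / d  # nearer -> louder
    n = int(sample_rate * duration)
    left = [loudness * left_gain * math.sin(2 * math.pi * freq * t / sample_rate)
            for t in range(n)]
    right = [loudness * right_gain * math.sin(2 * math.pi * freq * t / sample_rate)
             for t in range(n)]
    return left, right
```

A real ETA would substitute HRTF convolution for the panning step to obtain full spatial audio, as the Naviton prototype does.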
Keypoint detection is a basic step in many computer vision algorithms aimed at object recognition, automatic navigation, and analysis of biomedical images. Successful implementation of higher-level image analysis tasks, however, depends on reliable detection of characteristic local image regions termed keypoints. A large number of keypoint detection algorithms have been proposed and verified. In this paper we discuss the most important keypoint detection algorithms. The main part of this work is devoted to the description of a proposed keypoint detection algorithm that incorporates depth information computed from stereovision cameras or other depth-sensing devices. It is shown that filtering out keypoints that are context-dependent, e.g. those located at object boundaries, can improve keypoint matching performance, which is the basis for object recognition tasks. This improvement is demonstrated quantitatively by comparing the proposed algorithm to the widely accepted SIFT keypoint detector. Our study is motivated by the development of a system aimed at aiding the visually impaired in space perception and object identification.
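The core idea of rejecting context-dependent keypoints can be sketched as a depth-discontinuity test: a keypoint whose local depth neighbourhood spans a large range is assumed to straddle an object boundary (its appearance mixes foreground and background) and is discarded. The function below is a simplified sketch of that idea, not the paper's exact algorithm; the window size and depth-range threshold are assumed parameters.

```python
def filter_boundary_keypoints(keypoints, depth_map, window=1, max_depth_range=0.3):
    """Discard keypoints lying on depth discontinuities (object boundaries).

    Simplified sketch: for each keypoint (x, y) we inspect a
    (2*window+1)^2 neighbourhood in the depth map; if the spread of
    valid depth values exceeds max_depth_range (metres), the keypoint
    is treated as context-dependent and rejected.
    """
    rows, cols = len(depth_map), len(depth_map[0])
    kept = []
    for (x, y) in keypoints:
        patch = [depth_map[v][u]
                 for v in range(max(0, y - window), min(rows, y + window + 1))
                 for u in range(max(0, x - window), min(cols, x + window + 1))
                 if depth_map[v][u] > 0]  # 0 marks invalid (unmatched) disparity
        if patch and max(patch) - min(patch) <= max_depth_range:
            kept.append((x, y))
    return kept
```

In a full pipeline this filter would run after a standard detector such as SIFT, so that only keypoints on locally flat (in depth) surface patches enter the matching stage.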
An electronic system and an algorithm for estimating pedestrian geographic location in urban terrain are reported in the paper. Different sources of kinematic and positioning data are acquired (accelerometer, gyroscope, GPS receiver, and raster terrain maps) and jointly processed by a Monte Carlo simulation algorithm based on the particle filtering scheme. These data are processed and fused to estimate the most probable geographic location of the user. A prototype system was designed, built, and tested with a view to aiding blind pedestrians. Field trials showed that the method yields more accurate results than GPS readouts alone. Moreover, the estimated location of the user can be effectively sustained when GPS fixes are not available (e.g. in tunnels).
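The fusion scheme can be illustrated with one predict/update/resample cycle of a toy one-dimensional particle filter, where each particle is a hypothetical position along the walked path. This is a sketch of the general particle filtering technique named in the abstract, not the paper's implementation: real dead-reckoning input, map constraints, and parameter values are all assumptions here.

```python
import math
import random

def particle_filter_step(particles, step, gps_fix=None, step_sigma=0.2, gps_sigma=5.0):
    """One predict/update/resample cycle of a toy 1-D positioning particle filter.

    Prediction propagates each particle by the dead-reckoned step length
    (e.g. from the accelerometer) plus noise; when a GPS fix is available,
    a Gaussian likelihood re-weights the particles and stratified
    resampling concentrates them around the most probable location.
    Without a fix (e.g. in a tunnel) the filter coasts on dead reckoning.
    """
    # Predict: propagate every particle by the measured step plus noise.
    moved = [p + step + random.gauss(0.0, step_sigma) for p in particles]
    if gps_fix is None:
        return moved  # no GPS fix: dead reckoning only
    # Update: Gaussian likelihood of each particle given the GPS fix.
    weights = [math.exp(-0.5 * ((p - gps_fix) / gps_sigma) ** 2) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample (stratified): draw one sample per equal-probability stratum.
    n = len(moved)
    resampled, cum, j = [], weights[0], 0
    for i in range(n):
        pos = (i + random.random()) / n
        while pos > cum and j < n - 1:
            j += 1
            cum += weights[j]
        resampled.append(moved[j])
    return resampled
```

A 2-D version over raster map coordinates, with particles penalized for entering non-walkable map cells, follows the same three-phase structure.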