
WSEAS Transactions on Signal Processing


Print ISSN: 1790-5052
E-ISSN: 2224-3488

Volume 9, 2013


Issue 1, Volume 9, January 2013


Title of the Paper: Research on LASCA and Denoising of Blood Vessel images of Small Animals

Authors: Cong Wu, Nengyun Feng, Koichi Harada, Pengcheng Li

Abstract: A denoising approach based on the combination of Wiener filtering, order-statistic filtering and wavelet filtering is proposed for LASCA (LAser Speckle Contrast Analysis) images of blood vessels in small animals. The approach first performs laser speckle contrast analysis on cerebral blood flow and brain blood flow in rats to obtain their spatial and temporal contrast images. A filtering method is then proposed to deal with the noise in LASCA, and image restoration is achieved by applying the proposed hybrid filtering. Both subjective and objective assessments of the denoised images are given.

Keywords: Cerebral blood flow, Brain blood flow, Wiener filtering, Order-statistic filtering, Wavelet, Laser speckle contrast imaging, Hybrid filtering
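
A minimal sketch of the spatial speckle contrast computation that underlies LASCA, as described in the abstract above. This is an illustration only, not the authors' code; the window size and the synthetic test frame are assumptions.

```python
# Illustrative sketch: spatial laser speckle contrast analysis (LASCA).
# The contrast K = local std / local mean is computed over a sliding window
# of the raw speckle image; window size and the test image are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_speckle_contrast(speckle_img: np.ndarray, window: int = 7) -> np.ndarray:
    """Return the spatial contrast image K = local std / local mean."""
    img = speckle_img.astype(np.float64)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img * img, size=window)
    var = np.maximum(mean_sq - mean * mean, 0.0)      # local variance
    return np.sqrt(var) / (mean + 1e-12)              # avoid division by zero

# Example: a synthetic speckle frame stands in for a raw blood-flow image.
raw = np.random.gamma(shape=1.0, scale=1.0, size=(256, 256))
K = spatial_speckle_contrast(raw, window=7)
print(K.mean())
```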


Title of the Paper: Neural based Domain and Range Pool Partitioning using Fractal Coding for Nearly Lossless Medical Image Compression

Authors: S. Bhavani, K. Thanushkodi

Abstract: This work presents a fractal image compression scheme based on iterated transforms and machine-learning modelling. An improved quasi-lossless fractal coding scheme is proposed that preserves the rich features of the medical image as domain blocks and generates the remaining part of the image from them through fractal transformations. A machine-learning model is used to improve the performance of the fractal coding scheme and to reduce the encoding computational complexity. The performance of the proposed algorithm is evaluated in terms of compression ratio, PSNR and encoding time against standard fractal coding on MRI image datasets of size 512×512 over various thresholds. The results show an increase in encoding speed, outperforming some of the currently existing methods and thereby supporting the use of fractal-based image compression algorithms for medical images.

Keywords: Image compression, Iterated transforms, Fractal image compression, Medical image, Fractals


Title of the Paper: A High Performance Pipelined Discrete Hilbert Transform Processor

Authors: Wang Xu, Zhang Yan, Ding Shunying

Abstract: A high-performance pipelined discrete Hilbert transform (HT) processor is presented in this paper. The processor adopts the fast Fourier transform (FFT) algorithm to compute the discrete HT. The FFT is an effective way to compute the discrete HT, because the discrete HT can be obtained simply by multiplication with +j and -j in the frequency domain. The radix-2 FFT algorithm with both decimation-in-frequency (DIF) and decimation-in-time (DIT) decomposition is used to construct an efficient discrete HT signal flow graph (SFG). Some stages in the discrete HT SFG contain no multiplications; these stages are combined into one stage by simple swapping operations to decrease the computational latency. The discrete HT processor is composed of four types of pipelined processing elements (PEs). Some constant multiplications in these PEs are optimized to reduce hardware resources. Data are processed in fixed-point format with a 16-bit word width. The pipelined discrete HT processor can simultaneously perform calculations on the current frame of data, read input data for the next frame, and output the results of the previous frame. The symmetry of the twiddle factors is exploited to halve the size of the read-only memory (ROM). Pipelined arithmetic units (adders and multipliers) are designed to enhance the performance of the processor. A performance comparison with approaches from previous papers shows that the proposed discrete HT processor has the shortest clock latency in discrete HT computation for the same number of samples.

Keywords: Discrete Hilbert transform, FFT, Adder, Multiplier, FPGA, VLSI
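
A brief software sketch of the FFT-based discrete Hilbert transform the processor implements, i.e. multiplication by -j for positive frequencies and +j for negative frequencies. This illustrates the algorithm only, not the hardware design.

```python
# Illustrative sketch: the FFT-based discrete Hilbert transform.
import numpy as np

def discrete_hilbert(x: np.ndarray) -> np.ndarray:
    """Return the discrete Hilbert transform of a real sequence via the FFT."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n, dtype=complex)
    h[1:n // 2] = -1j                 # positive frequencies multiplied by -j
    h[n // 2 + 1:] = 1j               # negative frequencies multiplied by +j
    # DC and (for even n) Nyquist bins carry no quadrature component.
    return np.fft.ifft(X * h).real

# Example: the Hilbert transform of cos(w n) is sin(w n).
n = np.arange(256)
print(np.allclose(discrete_hilbert(np.cos(2 * np.pi * 8 * n / 256)),
                  np.sin(2 * np.pi * 8 * n / 256), atol=1e-10))
```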


Issue 2, Volume 9, April 2013


Title of the Paper: Local Pixel Class Pattern Based on Fuzzy Reasoning for Feature Description

Authors: Weiren Shi, Shuhan Chen, Li Fang

Abstract: Local features extracted from images have broad potential in a variety of computer vision applications, such as image retrieval, object recognition and scene recognition. However, many existing features are not robust enough in the presence of illumination changes, which are common in real-world applications, e.g. shadowing. In this paper, a novel feature descriptor is proposed that is designed to be more robust to illumination changes. The basic principle of the proposed method is the observation that although intensity values may change under varying illumination, the texture structure or pixel class at the corresponding locations remains unchanged. Specifically, this is achieved by applying histogram equalization and intensity normalization in a pre-processing step, and by considering overall intensity distribution properties together with local intensity difference information through fuzzy reasoning rules. To make the descriptor more discriminative and robust, a novel gradient-based weighting scheme is also proposed. Experimental results on the popular Oxford dataset show that the proposed descriptor outperforms many state-of-the-art methods, not only under complex illumination changes but also under many other image transformations.

Keywords: Local feature descriptor, Illumination invariance, Histogram Equalization, Fuzzy reasoning, Local pixel class pattern


Title of the Paper: Beamformings for Spectrum Sharing in Cognitive Radio Networks

Authors: Raungrong Suleesathira, Satit Puranachikeeree

Abstract: Cognitive radio is regarded as one of the most promising technologies for supporting spectrum sharing, in which secondary (cognitive) users coexist with users of a primary network whose radio band is licensed. Two conflicting challenges are how to keep the interference generated by the cognitive radio network to the primary network below an acceptable threshold while maximizing the sum-rate of the cognitive radio network. We present two beamforming methods: modified zero-forcing beamforming and transmit-receive beamforming. The zero-forcing beamforming is modified by adding the channel gain between the cognitive radio base station and the primary user to meet the two conflicting goals. The orthogonality of transmit beams in MIMO beamforming, obtained by the Gram-Schmidt method, achieves the first goal, namely that the primary user is interference-free. To satisfy the second goal, self-interference is reduced by constrained minimization of the mean output of the cognitive receiver arrays. To reduce the complexity of the system, the number of cognitive radio users must be limited, and the criteria for selecting the best cognitive radio users should increase the sum-rate of the cognitive radio network. A subspace-based scheduling scheme selects cognitive radio users that are as close to orthogonal to each other as possible, so that self-interference is mitigated. Simulation results are given to evaluate the performance of the proposed methods in terms of bit error rates, symbol error rates and sum-rates.

Keywords: beamforming, cognitive radios, constrained optimization
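
A minimal sketch of the Gram-Schmidt idea referred to in the abstract above: transmit beams built to be orthogonal to the primary user's channel so that the primary user sees no interference. This is an assumption-laden illustration, not the paper's exact algorithm; channel dimensions and the random channels are made up.

```python
# Illustrative sketch: zero-forcing style beams via complex Gram-Schmidt.
import numpy as np

def gram_schmidt_beams(h_primary, h_secondary):
    """Return unit-norm beams orthogonal to h_primary (and to each other)."""
    basis = [h_primary / np.linalg.norm(h_primary)]   # first vector to null against
    beams = []
    for h in h_secondary:
        v = h.astype(complex).copy()
        for b in basis:                               # remove components along basis
            v -= np.vdot(b, v) * b
        v /= np.linalg.norm(v)
        basis.append(v)
        beams.append(v)
    return beams

rng = np.random.default_rng(0)
n_tx = 4                                              # assumed number of transmit antennas
hp = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)
hs = [rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx) for _ in range(2)]
for w in gram_schmidt_beams(hp, hs):
    print(abs(np.vdot(hp, w)))                        # ~0: primary user is interference-free
```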


Title of the Paper: Predator Prey Optimization Method For The Design Of IIR Filter

Authors: Balraj Singh, J. S. Dhillon, Y. S. Brar

Abstract: This paper develops an innovative methodology for the robust and stable design of digital infinite impulse response (IIR) filters using the predator-prey optimization (PPO) method. Predator-prey optimization is used as a global search technique and exploratory search is exploited as a local search technique. Being a stochastic optimization procedure, the PPO technique avoids local stagnation, as the prey provide diversification in the search for the optimum solution due to the fear of the predator(s). Exploratory search fine-tunes the solution locally in a promising search area. The proposed PPO method enhances the capability to explore and exploit the search space both locally and globally to obtain the optimal filter design parameters. A multivariable optimization formulation is employed as the design criterion to obtain the optimal stable IIR filter that satisfies different performance requirements, such as minimizing the magnitude approximation error and minimizing the ripple magnitude. The proposed method is applied to the design of low-pass, high-pass, band-pass and band-stop digital IIR filters as multivariable optimization problems. The computational experiments show that the proposed PPO method is superior or at least comparable to other algorithms and can be efficiently applied to higher-order filter design.

Keywords: Digital IIR filters, Predator Prey Optimization, Exploratory search algorithm, Multi parameter optimization
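
To make the design criterion concrete, here is a sketch of the kind of magnitude-approximation-error objective such an optimizer would score candidate filters with. It is an assumption for illustration, not the authors' PPO code; the cutoff and the candidate coefficients are arbitrary.

```python
# Illustrative sketch: magnitude approximation error of a candidate IIR filter.
import numpy as np
from scipy.signal import freqz

def magnitude_error(b, a, n_points: int = 256) -> float:
    """L2 error between the candidate magnitude response and an ideal low-pass response."""
    w, h = freqz(b, a, worN=n_points)                 # frequency grid on [0, pi)
    desired = np.where(w <= 0.3 * np.pi, 1.0, 0.0)    # assumed cutoff at 0.3*pi
    return float(np.sum((np.abs(h) - desired) ** 2))

# A candidate parameter vector (numerator b, denominator a) as an optimizer would evaluate it.
b = np.array([0.2, 0.4, 0.2])
a = np.array([1.0, -0.3, 0.1])
print(magnitude_error(b, a))
```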


Title of the Paper: Efficient Compression of CT Images with Modified Branched Inverse Pyramidal Decomposition

Authors: Roumen Kountchev, Roumiana Kountcheva

Abstract: This paper presents a novel approach for the adaptive compression of groups of computer tomography (CT) images, based on a modified branched inverse pyramid decomposition. The result is a high compression ratio with retained image quality. To achieve this, the high correlation between sequences of CT images representing the same object(s) is exploited. A group of CT images is processed with the Inverse Pyramid Decomposition (IPD) using a special branched structure, called the Modified Branched IPD. The experimental results confirm the efficiency of the presented method. The general characteristics of the new decomposition make it a reliable basis for use in various application areas for processing similar (multi-view) images or sequences of similar images. As the experiments show, this compression is very efficient when used for archiving sequences of CT images. It could also be used for compressing still multispectral, hyperspectral or multi-view images and video sequences obtained from surveillance video cameras, ultrasound scanners, thermo-vision systems, scanning microscopes, etc.

Keywords: Image processing, Archiving of CT images, Compression of sequences of medical images, Image group coding, Modified branched pyramidal image decomposition


Title of the Paper: Adaptive Approach Based on Curve Fitting and Interpolation for Boundary Effects Reduction

Authors: Hang Su, Jingsong Li

Abstract: Boundary effects arise from incomplete data in the boundary regions when the analysis window approaches the edge of a signal. Various extension schemes have been developed to handle the boundaries of finite-length signals and reduce these effects; zero padding, periodic extension and symmetric extension are the basic methods, but all of them have drawbacks. In this paper, we consider the problem of handling the boundary effects caused by improper extension methods in the wavelet transform. An extension algorithm based on curve fitting, with properties that make it more suitable for boundary-effect reduction, is presented. This extension algorithm preserves the time-varying characteristics of the signal and is effective in reducing the distortions appearing at the boundary. An interpolation approach is then applied in the boundary-effect region to further alleviate the distortions. Procedures for realizing these two algorithms and related issues are presented. Several experimental tests on synthetic signals exhibiting linear and nonlinear laws confirm that the proposed algorithms alleviate the boundary effects more effectively than the existing extension methods.

Keywords: Finite-length Signals, Convolution, Wavelet Transform, Boundary Effects, Fourier Series Extension, Interpolation
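
A minimal sketch of boundary extension by curve fitting, the general idea described in the abstract above: fit a low-order polynomial to the signal tail and extrapolate beyond the edge instead of zero padding or mirroring. Fit length, order and the test signal are assumptions, not the paper's exact scheme.

```python
# Illustrative sketch: right-boundary extension of a finite-length signal by polynomial fitting.
import numpy as np

def curve_fit_extension(x: np.ndarray, n_ext: int, fit_len: int = 16, order: int = 2) -> np.ndarray:
    """Append n_ext extrapolated samples obtained from a polynomial fit of the signal tail."""
    t = np.arange(len(x) - fit_len, len(x))          # indices of the tail used for fitting
    coeffs = np.polyfit(t, x[-fit_len:], order)      # least-squares polynomial fit
    t_ext = np.arange(len(x), len(x) + n_ext)        # indices of the extension region
    return np.concatenate([x, np.polyval(coeffs, t_ext)])

# Example: a chirp-like signal keeps its local trend across the boundary.
n = np.arange(128)
sig = np.sin(2 * np.pi * 0.01 * n ** 1.3)
print(curve_fit_extension(sig, n_ext=8)[-8:])
```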


Title of the Paper: FPGA Implementation of 3GPP-LTE Physical Downlink Control Channel Using Diversity Techniques

Authors: S. Syed Ameer Abbas, S. J. Thiruvengadam

Abstract: Long Term Evolution (LTE) of the UMTS Terrestrial Radio Access and Radio Access Network is a fourth-generation wireless broadband technology that provides backward compatibility with 2G (Second Generation) and 3G (Third Generation) technologies. LTE delivers high data rates and low latency at reduced cost. This paper proposes a novel architecture for Single Input Single Output (SISO) 1x1, Multiple Input Single Output (MISO) 4x1 and Multiple Input Multiple Output (MIMO) 4x2 configurations of the Physical Downlink Control Channels of LTE. The physical downlink channel processing involves scrambling, modulation, layer mapping, precoding and mapping of data to resource elements at the transmitter, and demapping from resource elements, decoding, de-layer mapping, demodulation and descrambling at the receiver. In the proposed design, these steps are carried out in a single architecture comprising all the data and control channels. Based on simulation and implementation, results are discussed in terms of the Register Transfer Level (RTL) design, Field Programmable Gate Array (FPGA) editor, power estimation and resource estimation. ModelSim is used to simulate all the modules of all control channels; for synthesis and implementation, the PlanAhead 13.2 tool and a Virtex-5 xc5vlx50tff1136-1 device board are used.

Keywords: Long Term Evolution (LTE), Single Input Single Output (SISO), Multiple Input Single Output (MISO), Multiple Input Multiple Output (MIMO), Physical Downlink Shared Channel (PDSCH), Physical Broadcast Channel (PBCH) and Physical Multicast Channel (PMCH)


Issue 3, Volume 9, July 2013


Title of the Paper: Comparison and Analysis of GNSS Signal Tracking Performance based on Kalman Filter and Traditional Loop

Authors: Xingli Sun, Honglei Qin, Jingyi Niu

Abstract: Because Kalman filter technology offers better estimation and prediction of dynamic signals, it is increasingly used in GNSS signal tracking. Based on the steady-state error, transfer function and equivalent noise bandwidth of the Kalman filter and of the traditional loop in the steady state, the tracking performance of the two methods is compared theoretically. The theoretical analysis demonstrates that the dynamic stress error of Kalman filter tracking is smaller than that of the traditional loop, and that the Kalman filter can track dynamic signals accurately with a small equivalent noise bandwidth. The analysis is verified by simulation; the simulation results show that the tracking sensitivity of the Kalman filter is similar to that of the traditional loop, while the Kalman filter tracking method offers higher dynamic performance and better accuracy.

Keywords: GNSS, Signal Tracking, Kalman Filter, traditional loop, Phase Locked Loop, steady status
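
A minimal sketch of the kind of Kalman-filter tracking the abstract compares against a traditional loop: a two-state filter estimating carrier phase and phase rate from noisy phase measurements, the role a phase-locked loop plays in a conventional tracking channel. All noise settings and units (normalized to one sample period) are assumptions.

```python
# Illustrative sketch: two-state Kalman filter tracking phase and phase rate.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])     # state transition: [phase, phase rate]
H = np.array([[1.0, 0.0]])                  # only the phase is observed
Q = np.diag([1e-5, 1e-5])                   # process noise (assumed signal dynamics)
R = np.array([[1e-2]])                      # measurement (discriminator) noise

x = np.zeros((2, 1))                        # state estimate
P = np.eye(2)                               # state covariance

rng = np.random.default_rng(1)
true_rate = 0.3                             # rad per sample (assumed Doppler)
for k in range(500):
    z = np.array([[true_rate * k + rng.normal(scale=0.1)]])   # noisy phase sample
    x = F @ x                               # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)               # Kalman gain
    x = x + K @ (z - H @ x)                 # update
    P = (np.eye(2) - K @ H) @ P

print(x[1, 0])                              # estimated phase rate, close to 0.3
```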


Title of the Paper: A Novel Method of Walsh-Hadamard Code Generation using Reconfigurable Lattice filter and its application in DS-CDMA System

Authors: G. Suchitra, M. L. Valarmathi

Abstract: Walsh-Hadamard codes are widely used as signature codes in current wireless standards such as IS-95 CDMA, WCDMA and CDMA2000, and in image transform applications. They are also used in conjunction with error-correction algorithms for Reed-Muller codes and in dyadic-invariant signal processing. In this paper, a novel method of generating Walsh-Hadamard (WH) codes using a lattice filter is proposed. The obtained results agree with the properties of WH codes. The entire WH code set can be generated by changing the reflection coefficients without modifying the filter structure; this feature ensures reconfigurability and enables the use of the structure in software radio. Moreover, the reconfigurable structure can be used as a matched filter for multiuser detection in a Direct Sequence Code Division Multiple Access (DS-CDMA) system. The BER performance of a DS-CDMA system employing a correlator receiver and the proposed matched-filter receiver for despreading is simulated in an AWGN channel and a flat-fading Rayleigh channel. Simulation results showing the generation of WH codes from the autoregressive model of a first-order Gauss-Markov process are also presented.

Keywords: Software defined radio, Walsh Hadamard codes, Lattice filter, Matched filter, Autoregressive model, Gauss-Markov process
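
For reference, a sketch of the code set the paper generates, produced here with the standard Sylvester recursion. This is only a reminder of what WH codes are; the paper's contribution is a reconfigurable lattice-filter generator, which is not reproduced here.

```python
# Illustrative sketch: Walsh-Hadamard spreading codes via the Sylvester construction.
import numpy as np

def walsh_hadamard_codes(order: int) -> np.ndarray:
    """Return a (2**order x 2**order) matrix whose rows are the WH spreading codes."""
    H = np.array([[1]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])     # H_{2n} = [[H_n, H_n], [H_n, -H_n]]
    return H

H8 = walsh_hadamard_codes(3)
print(H8 @ H8.T)                            # = 8 * I: rows are mutually orthogonal
```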


Title of the Paper: Decorrelation of Multispectral Images, Based on Hierarchical Adaptive PCA

Authors: Roumen Kountchev, Roumiana Kountcheva

Abstract: This work presents a new approach for processing groups of multispectral (MS) images, called Hierarchical Adaptive Principal Component Analysis (HAPCA). The aim is to decorrelate each group of N multispectral images, obtained after dividing the whole set into sub-groups of 2 or 3 images each. As a result, most of the power of the images in one group is concentrated in a small number of eigenimages. This could be achieved with the well-known Principal Component Analysis (PCA) using a transform matrix of size N×N; in that case, however, the implementation requires high computational power because it relies on iterative algorithms. In contrast, the two-level HAPCA uses transform matrices of size 3×3 (or 2×2) instead of a PCA transform matrix of size 9×9 (or 8×8, respectively), which makes the required computational complexity about two times lower on average. A further advantage of the new algorithm is that it permits parallel processing of each image sub-group at all hierarchical levels. Experimental results for the HAPCA algorithm applied to groups of MS images confirm the high decorrelation obtained. The proposed algorithm could serve as a basis for new algorithms for efficient compression of sets of MS and medical images and video sequences, for reduction of object feature spaces in image sequences, etc.

Keywords: Image processing, Image segmentation, Image contents analysis, Lossless image compression, Histogram modification, Inverse pyramid decomposition, Lossy image compression
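
A minimal sketch of the building block HAPCA applies hierarchically: decorrelating a sub-group of 3 bands with a 3×3 PCA transform instead of one large N×N transform. This is an illustration under assumptions (random stand-in bands), not the authors' HAPCA implementation.

```python
# Illustrative sketch: 3x3 PCA decorrelation of a sub-group of 3 image bands.
import numpy as np

def pca_decorrelate_group(bands: np.ndarray) -> np.ndarray:
    """bands: (3, H, W) image group -> (3, H, W) eigenimages, decorrelated."""
    n, h, w = bands.shape
    X = bands.reshape(n, -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)                 # remove per-band mean
    C = (X @ X.T) / X.shape[1]                          # 3x3 covariance matrix
    _, vecs = np.linalg.eigh(C)                         # orthonormal eigenvectors
    return (vecs.T[::-1] @ X).reshape(n, h, w)          # components, largest first

group = np.random.rand(3, 128, 128)                     # stand-in for 3 MS bands
eig_imgs = pca_decorrelate_group(group)
print(np.round(np.corrcoef(eig_imgs.reshape(3, -1)), 3))  # ~identity: decorrelated
```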


Title of the Paper: Ear Recognition System using Radon Transform and Neural Network

Authors: C. Murukesh, K. Thanushkodi

Abstract: Ear recognition is one of the many evolving cutting-edge technologies in the field of security surveillance. This paper presents an ear recognition system based on the Radon transform combined with Principal Component Analysis (PCA) for feature extraction, and on the integration of multi-class Linear Discriminant Analysis (LDA) and Self-Organizing Feature Maps (SOM) for classification. The Radon transform extracts the directional features of an image by projecting the image matrix at different orientations. The experimental results show that the verification performed by the ear recognition system, tested on two different public ear databases, is accurate and fast.

Keywords: Radon Transform, Principal Component Analysis, Multi-class Linear Discriminant Analysis, Self-Organizing Feature Maps
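
A minimal sketch of the feature-extraction stage described above: Radon-transform projections followed by PCA. scikit-image and scikit-learn are assumed, random arrays stand in for an ear database, and the LDA/SOM classification stage is omitted; this is not the authors' pipeline.

```python
# Illustrative sketch: Radon projections + PCA as directional ear features.
import numpy as np
from skimage.transform import radon
from sklearn.decomposition import PCA

def radon_pca_features(images: np.ndarray, n_components: int = 20) -> np.ndarray:
    """images: (n_samples, H, W) grayscale ear images -> (n_samples, n_components)."""
    angles = np.arange(0.0, 180.0, 5.0)                       # projection orientations
    sinograms = np.array([radon(im, theta=angles, circle=False).ravel() for im in images])
    return PCA(n_components=n_components).fit_transform(sinograms)

ears = np.random.rand(30, 64, 64)                             # stand-in for a public ear database
print(radon_pca_features(ears).shape)                         # (30, 20)
```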


Title of the Paper: NMF based Dictionary Learning for Automatic Transcription of Polyphonic Piano Music

Authors: Giovanni Costantini, Massimiliano Todisco, Renzo Perfetti

Abstract: Music transcription consists of transforming the musical content of audio data into a symbolic representation. The objective of this study is to investigate a transcription system for polyphonic piano. The proposed method focuses on temporal musical structures, note events and their main characteristics: the attack instant and the pitch. Onset detection exploits a time-frequency representation of the audio signal. Feature extraction is based on Sparse Nonnegative Matrix Factorization (SNMF) and the Constant Q Transform (CQT), while note classification is based on Support Vector Machines (SVMs). Finally, to validate the method, we present a collection of experiments using a large number of musical pieces of heterogeneous styles.

Keywords: Music transcription, classification, nonnegative matrix factorization, constant Q transform, support vector machines
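
A minimal sketch of the CQT-plus-NMF representation described above. Assumptions: librosa for the constant-Q transform, scikit-learn's plain NMF standing in for the authors' sparse NMF, a synthetic two-note signal standing in for piano audio, and no SVM stage.

```python
# Illustrative sketch: CQT magnitude spectrogram factorized into templates W and activations H.
import numpy as np
import librosa
from sklearn.decomposition import NMF

sr = 22050
t = np.arange(sr * 2) / sr
note = lambda f0: sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 5))
y = np.concatenate([note(220.0), note(330.0)])                # two synthetic "notes" in sequence

C = np.abs(librosa.cqt(y, sr=sr, n_bins=84, bins_per_octave=12))   # (84 bins, frames)
model = NMF(n_components=4, init="nndsvd", max_iter=500)
H = model.fit_transform(C.T)                                  # activations per time frame
W = model.components_                                         # learned spectral templates
print(W.shape, H.shape)
```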


Title of the Paper: Multiple Fault Detection in Typical Automobile Engines: A Soft Computing Approach

Authors: S. N. Dandare, S. V. Dudul

Abstract: Fault detection has gained growing importance for vehicle safety and reliability. For the improvement of reliability, safety and efficiency, advanced methods of supervision, fault detection and fault diagnosis are becoming increasingly important for many automobile systems. The trial-and-error approach often applied to detect faults may leave the engine more damaged instead of repaired. To alleviate this problem, the idea of recording engine sound has been suggested, so that the fault can be diagnosed correctly without opening the engine. In this paper, fault detection for a two-stroke engine, a Hero Honda Passion four-stroke engine and a Maruti Suzuki Alto automobile engine is proposed. The objective is to categorize the acoustic signals of the engines into healthy and faulty states. Acoustic emission signals are recorded from the three automobile engines in both healthy and faulty conditions. The paper proposes a soft-computing approach for the detection of multiple faults in automobile engines, comprising signal conditioning, signal processing, statistical analysis and Artificial Neural Networks. Statistical techniques and different Artificial Neural Networks are employed to classify the faults correctly. The performance of the statistical techniques and of ten types of Artificial Neural Networks is compared on the basis of average classification accuracy, and finally an optimal neural network is designed for the best performance.

Keywords: Artificial Neural Network, Automobile Engine, Classification Accuracy, Fault Detection, Statistical Techniques


Issue 4, Volume 9, October 2013


Title of the Paper: MPEG Video Deployment in Interactive Multimedia Systems: HEVC vs. AVC Codec Performance Study

Authors: Dragorad Milovanovic, Zoran Bojkovic

Abstract: The coding efficiency of three generations of video coding standards is compared by means of PSNR and subjective testing in interactive video applications, such as video chat, video conferencing and telepresence systems. A unified approach is applied to the analysis of the designs of the MPEG-H HEVC/H.265, MPEG-4 AVC/H.264 and H.263 reference software implementations. The results of performance tests for selected HD720 video sequences indicate that HEVC encoders achieve objective video quality equivalent to that of AVC-conformant encoders while using approximately 60% less bit rate on average. The bit-rate reduction BRr and coding gain CG based on the PSNR measure, and the complexity based on encoding/decoding time, of HEVC MP vs. AVC HP vs. H.263 CHC are tested at bit rates of 0.256, 0.384, 0.512, 0.850 and 1.500 Mbps using the low-delay encoding constraints typical of real-time conversational applications. Low-delay coding is obtained by selecting appropriate prediction structures and coding options in the software configuration of the encoders. Real-time decoding complexity was studied on a personal Ultrabook x86 computer in a conversational application.

Keywords: Operational configuration, coding gain, subjective quality, codec complexity


Title of the Paper: Chaotic Image Encryption via Convex Sinusoidal Map

Authors: F. Abu-Amara, I. Abdel-Qader

Abstract: A new image encryption scheme is proposed based on a one-dimensional Convex-Sinusoidal Map (CSM). The dynamics of this chaotic map are analyzed and used to develop a two-phase image encryption framework. In the first phase, image permutation, a pseudorandom number generator is proposed to shuffle pixel locations, while in the second phase, image substitution, the CSM is used to modify pixel intensity values. Experimental results indicate that the Convex-Sinusoidal Map exhibits robust chaos over a wide range of chaotic behavior. Results also show that the proposed image encryption framework has a large key space, making it immune against brute-force attack methods, and that encrypting images requires only modest time on an Intel Core i3 machine with 4 GB of memory. Moreover, the algorithm completely obscures plain-text images and provides high encryption performance.

Keywords: Image encryption, Chaotic maps, Pseudorandom number, Image substitution, Image permutation
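
A minimal sketch of the two-phase permutation/substitution structure described above. The Convex-Sinusoidal Map itself is not given in the abstract, so the classic logistic map is used here purely as a stand-in chaotic generator; key value and image are arbitrary.

```python
# Illustrative sketch: chaotic pixel permutation followed by keystream substitution.
import numpy as np

def chaotic_sequence(n: int, x0: float = 0.37, r: float = 3.99) -> np.ndarray:
    """Iterate the logistic map x <- r*x*(1-x) to obtain n chaotic values in (0, 1)."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def encrypt(img: np.ndarray, key: float = 0.37) -> np.ndarray:
    flat = img.ravel()
    seq = chaotic_sequence(flat.size, x0=key)
    perm = np.argsort(seq)                              # phase 1: shuffle pixel locations
    shuffled = flat[perm]
    keystream = np.floor(seq * 256).astype(np.uint8)    # phase 2: modify intensity values
    return np.bitwise_xor(shuffled, keystream).reshape(img.shape)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in plain image
print(encrypt(img)[:2, :4])
```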


Title of the Paper: Language and Text-Independent Speaker Identification System Using GMM

Authors: S. Selva Nidhyananthan, R. Shantha Selva Kumari

Abstract: This paper motivates the use of the Dynamic Mel-Frequency Cepstral Coefficient (DMFCC) feature and of the combination of DMFCC and MFCC features for robust language- and text-independent speaker identification. The MFCC feature, modelled on the human auditory system, has been the most widely used feature for speaker recognition because of its low vulnerability to noise perturbation and its small session variability. However, the human auditory system can also sensitively perceive pitch changes in speech. Therefore, an algorithm that integrates speaker-specific pitch-change information into the design of the dynamic Mel-scale filter bank improves the effectiveness of speaker identification. The individual Gaussian components of a Gaussian Mixture Model (GMM) represent vocal-tract configurations that are effective for speaker identification. The performance of the speaker identification system is evaluated experimentally on a microphone speech database of 120 speakers. The experiments examine the speaker Identification Error Rate (IDER) using test segments of different lengths and text-independent utterances in the Tamil and English languages. In comparison with the identification error rate of 5.8% obtained with the MFCC-based system and 2.9% with the DMFCC-based system, an error rate of 1.2% is obtained when DMFCC feature vectors are combined with MFCC feature vectors. The experimental results confirm that the GMM is efficient for language- and text-independent speaker identification.

Keywords: Speaker Identification, Mel-scale filter bank, Gaussian filters, Mel Frequency Cepstral Coefficient, Dynamic Mel Frequency Cepstral Coefficient, Gaussian Mixture Model
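
A minimal sketch of the baseline the paper extends: MFCC features scored against per-speaker GMMs. Assumptions: librosa and scikit-learn, and synthetic tones standing in for enrolment and test utterances; the dynamic (pitch-adaptive) filter bank is not reproduced.

```python
# Illustrative sketch: MFCC + GMM speaker identification baseline.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

sr = 16000

def mfcc_frames(y: np.ndarray) -> np.ndarray:
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # (frames, 13)

rng = np.random.default_rng(0)
t = np.arange(sr * 2) / sr
enrol = {                                                   # stand-in "speakers"
    "spk1": np.sin(2 * np.pi * 120 * t) + 0.05 * rng.standard_normal(t.size),
    "spk2": np.sin(2 * np.pi * 210 * t) + 0.05 * rng.standard_normal(t.size),
}
models = {s: GaussianMixture(n_components=4, covariance_type="diag").fit(mfcc_frames(y))
          for s, y in enrol.items()}

test = np.sin(2 * np.pi * 123 * t) + 0.05 * rng.standard_normal(t.size)
scores = {s: m.score(mfcc_frames(test)) for s, m in models.items()}
print(max(scores, key=scores.get))                          # highest-likelihood speaker
```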


Title of the Paper: Analysis of Global Exponential Stability of Fuzzy BAM Neural Networks with Delays

Authors: Huayi Yin, Qianhong Zhang, Lihui Yang

Abstract: In this paper, fuzzy bi-directional associative memory (BAM) neural networks with constant delays are considered. Some sufficient conditions for the existence and global exponential stability of a unique equilibrium point are established using a fixed point theorem and differential inequality techniques. The obtained conditions are easy to check and guarantee the existence, uniqueness and global exponential stability of the equilibrium point.

Keywords: Fuzzy BAM neural networks, Equilibrium point, Global exponential stability, Delays


Title of the Paper: A Comprehensive Study on Wavelet Based Shrinkage Methods for Denoising Natural Images

Authors: S. Sutha, E. Jebamalar Leavline, D. Asir Antony Gnana Singh

Abstract: Transmitting information in the form of images has become very important in the modern age. Images are often corrupted by various types of noise during acquisition and transmission, and such images have to be cleaned before use in any application. Image denoising has been an active area of image processing research for decades. The wavelet transform has long been an efficient tool for image representation because of its simplicity, energy compaction and sparse representation. Numerous wavelet-based thresholding techniques have been proposed, based on universal and adaptive thresholding. Fixing an optimal threshold is a key factor determining the performance of denoising algorithms; this optimal threshold should be estimated from the image statistics to ensure better noise removal in terms of the clarity (or quality) of the images. In this paper, an experimental study of state-of-the-art wavelet-based thresholding methods is presented. The denoising performance of the wavelet-based shrinkage methods is compared in terms of mean square error, peak signal-to-noise ratio, image enhancement factor and the most recent measure, the multiscale structural similarity index.

Keywords: Image denoising, Wavelet transform, Threshold methods, Adaptive threshold, Wavelet subbands, Shrinkage methods
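
A minimal sketch of one baseline such a comparison would include: wavelet shrinkage with the universal (VisuShrink) threshold and soft thresholding. PyWavelets is assumed, the test image is synthetic, and this is not the paper's full study.

```python
# Illustrative sketch: wavelet soft-threshold denoising with the universal threshold.
import numpy as np
import pywt

def wavelet_denoise(img: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise level estimated from the finest diagonal subband (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))            # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail) for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)[:img.shape[0], :img.shape[1]]

clean = np.outer(np.hanning(128), np.hanning(128))            # smooth test image
noisy = clean + 0.05 * np.random.default_rng(0).standard_normal(clean.shape)
print(np.mean((wavelet_denoise(noisy) - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```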


Title of the Paper: Wavelet LPC With Neural Network for Speaker Identification System

Authors: Khaled Daqrouq, Ali Morfeq, Mohammad Ajour, Abdulhameed Alkhateeb

Abstract: In this study, an average framing linear prediction coding (AFLPC) technique for text-independent speaker identification systems is proposed. The combination of modified LPC with the wavelet transform (WT), termed AFLPC, is investigated for speaker identification, building on our previous work. The procedure is based on feature extraction and voice classification. In the classification phase, a feed-forward backpropagation neural network (FFBPN) is applied because of its rapid response and ease of implementation. In the practical investigation, the performance of different wavelet transforms in conjunction with AFLPC was compared. In addition, the capability of the proposed system was analyzed by comparing it with other systems proposed in the literature. The FFBPN classifier achieves a better recognition rate (97.36%) with the wavelet packet (WP) and AFLPC feature extraction method, termed WPLPCF. An analysis of the proposed system in additive white Gaussian noise (AWGN) and real noise environments is also suggested.

Keywords: Speech; LPC; Average framing; Wavelet; Neural network


Title of the Paper: FPGA Implementation of 1D and 2D DWT Architecture using Modified Lifting Scheme

Authors: M. Nagabushanam, S. Ramachandran, P. Kumar

Abstract: Image compression is one of the prominent topics in image processing and plays a very important role in reducing image size for real-time transmission and storage. Many standards recommend the use of the DWT for image compression. The computational complexity of the DWT poses a major challenge for the real-time use of DWT-based image compression algorithms. In this paper, we propose a modified lifting scheme for computing the approximation and detail coefficients of the DWT. The modified equations use right-shift operators and 6-bit multipliers. The number of hierarchy levels in the computation is reduced to one, thereby minimizing the delay and increasing throughput. The design, implemented on a Virtex-5 FPGA, operates at 180 MHz, consumes less than 1 W of power and occupies less than 1% of the LUT resources on the FPGA. The developed architecture is suitable for real-time image processing on an FPGA platform.

Keywords: DWT, Image compression, BZFAD multiplier, FPGA, Lifting scheme
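
A minimal sketch of the predict/update structure that a lifting-scheme DWT architecture accelerates, shown here for one level of the 1D CDF 5/3 wavelet in floating point. The boundary handling and the test signal are assumptions; these are not the paper's modified hardware equations.

```python
# Illustrative sketch: one level of the CDF 5/3 DWT via the lifting scheme.
import numpy as np

def lifting_53_forward(x: np.ndarray):
    """Return (approximation, detail) coefficients for an even-length signal."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    even_next = np.append(even[1:], even[-1])          # symmetric extension on the right
    d = odd - 0.5 * (even + even_next)                 # predict step
    d_prev = np.insert(d[:-1], 0, d[0])                # symmetric extension on the left
    s = even + 0.25 * (d_prev + d)                     # update step
    return s, d

x = np.arange(16, dtype=float) ** 2                    # smooth test signal
s, d = lifting_53_forward(x)
print(s)                                               # low-pass (approximation) coefficients
print(d)                                               # high-pass (detail) coefficients
```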


Title of the Paper: A Novel Algorithm for 2D-IIR Filters Synthesis via 2D-FIR Filters Model Reduction

Authors: Amel Baha Houda Adamou-Mitiche, Lahcène Mitiche

Abstract: A digital 2D-FIR filter with linear phase, circularly symmetric with respect to the origin of the frequency plane, is designed using the two-dimensional windowing method. An economical filter with high information efficiency is obtained by applying the balanced realization method to this full-order filter. The result is a linear phase IIR filter whose frequency response is very close to that of the initial filter.

Keywords: Digital two-dimensional filter, windowing, state-space, balanced realization, separable denominator, model reduction

