
EURASIP Journal on Applied Signal Processing 2005:16, 2694–2700

© 2005 Hindawi Publishing Corporation

Multimedia Terminal System-on-Chip

Design and Simulation

Ivano Barbieri

Department of Biophysical and Electronic Engineering, University of Genova, Via Opera Pia 11A, 16146 Genova, Italy

Email: ivano@dibe.unige.it

Massimo Bariani

Department of Biophysical and Electronic Engineering, University of Genova, Via Opera Pia 11A, 16146 Genova, Italy

Email: bariani@dibe.unige.it

Alessandro Scotto

Department of Biophysical and Electronic Engineering, University of Genova, Via Opera Pia 11A, 16146 Genova, Italy

Email: scotto@dibe.unige.it

Marco Raggio

Department of Biophysical and Electronic Engineering, University of Genova, Via Opera Pia 11A, 16146 Genova, Italy

Email: raggio@dibe.unige.it

Received 30 January 2004; Revised 30 March 2005

This paper proposes a design approach based on integrated architectural and system-on-chip (SoC) simulations. The main idea is to have an efficient framework for the design and the evaluation of multimedia terminals, allowing a fast system simulation with a definable degree of accuracy. The design approach includes the simulation of very long instruction word (VLIW) digital signal processors (DSPs), the utilization of a device multiplexing the media streams, and the emulation of real-time media acquisition. This methodology allows the evaluation of both the multimedia algorithm implementations and the hardware platform, giving feedback on the complete SoC, including the interaction between modules and conflicts in accessing either the bus or shared resources. An instruction set architecture (ISA) simulator and an SoC simulation environment compose the integrated framework. In order to validate this approach, the evaluation of an audio-video multiprocessor terminal is presented, and the complete simulation test results are reported.

Keywords and phrases: system-on-chip, multimedia, HW-SW codesign, DSP, simulation, VLIW.

1. INTRODUCTION

Architecture achievements of recent years have followed the HW-SW codesign approach, transferring functionalities from hardware to software implementation [5] and moving developers toward system-on-chip (SoC) programmable devices. At the same time, application-driven SoC design appears to be the answer to fulfilling multimedia application requirements [5]. Moreover, thanks to the VLIW [6] architecture approach, it is now possible to design DSP-oriented chips that are stand-alone processors [7] reaching high degrees of parallelism [8]. Even though a number of general-purpose processors are suitable for DSP tasks, native DSP processors outperform general-purpose devices in terms of their cost-performance ratio and power consumption [9, 10].

In the following, a system-on-chip design based on a dual-core DSP architecture will be described. The approach is based on HW-SW codesign, simulating at the same time the instruction set architecture (ISA) and the complete SoC, and taking into account single-device performance together with run-time interactions between cores and peripheral devices.

2. MAIN ISSUE AND RELATED WORK

An HW-SW codesign environment is an application-driven tool chain able to evaluate and to modify both SW and HW at the same time. The software developer's challenge is often to optimize a standard-based multimedia application [11, 12] to get real-time performance on a given architecture. The best result, following a given list of design constraints, is achievable by tuning both the algorithm implementation and architectural features. Instruction simulators are nowadays widely used in application-driven architecture design. Architecture simulation incorporates several simulation techniques, ranging from interpretive simulation to application-architecture-specific compiled simulation techniques. Even though the compiled simulation approach allows better peak performance in terms of simulated instructions per second, interpretive simulation appears to be more flexible [5]. Most of the state-of-the-art multimedia and DSP device solutions are based on SoC. The development environment should allow simulating run-time interactions between core and peripheral devices in an SoC. In this scenario, the interpretive approach best matches heterogeneous device simulation, especially regarding interdevice run-time interactions, which are not easily predictable a priori. An example of an SoC simulation environment using an ISS integration approach is introduced in [13]. The proposed solution uses cycle-accurate ISSs integrated with the architecture design environment. The work in [13] also analyses tradeoffs between ISS and communication interfaces in SoC simulation environments, and highlights the importance of architectural visibility and of the debug interface. The work in [14] presents an innovative approach for the design of application-specific multiprocessor SoCs. Using a generic architecture model as initial template, available resources are divided into software, hardware, and communication components. In the SoC validation process, CPUs are replaced by cycle-accurate ISSs. However, cycle accuracy often leads to low performance, whereas a vertical optimisation for improving simulation speed will have a negative effect on flexibility. Usually simulation environments try to achieve the best compromise between speed and accuracy depending on the scope of the development environment.

The methodology introduced in this paper is based on both architecture and SoC simulation. The presented approach employs an interpretive ISS integrated in an SoC design tool for fast simulation, in order to obtain an effective architectural exploration and to investigate new SoC architectures. Compared to other existing hardware design tools, this methodology makes it possible to obtain efficient system simulations in a short time, with parametrical accuracy and good observability.

An efficient HW-SW codesign development environment should also allow continuous access to register and bus values during the simulation, providing statistics about the use of the different resources [11]. The capability to collect statistics on both system devices (e.g., internal resource utilization) and interdevice communications allows a software developer to better focus his work either on a particular application or on a portion of the selected code.

In this paper we demonstrate the effectiveness of using integrated architectural and SoC design environments, reducing the time needed to identify major design constraints and bottlenecks, starting from the software application development. The described methodology does not limit hardware accuracy, allowing subsequent steps of the system design to be performed.

The proposed SoC solution is mainly based on two VLIW cores working in parallel, a multiplexer, and a bus arbiter. The multiplexer component implements the H.223 standard [15]. H.223 has a flexible mapping scheme that is suitable for a variety of media and can handle variable frame lengths. This solution is independent from streaming applications, allowing any composition of different media channels (audio/video/data communication control), and implements different protocols, namely H.223 annexes A and B [15], depending on the type of multimedia terminal, such as wired PSTN-ISDN [16] and wireless 3GPP [17].

The H.223 flexibility and the utilization of programmable DSPs, together with the definition of a generic communication interface that controls the devices' behaviour and the interactions among the SoC modules, allow the design and evaluation of multimedia terminals using different streaming standards.

The chosen SoC scheme, modules, and HW-SW partition, including two general-purpose cores, one mux-specific device, and memories, have been selected to address the streaming process for a given family of multimedia terminals consisting of ITU-T H.32x and 3GPP standards. SoC scheme parameters are, for example, memory sizes, bus width, and multiplexer configuration. From the signal processing point of view, the scheme supports several compression algorithms such as H.263, MPEG4, G.723, and AMR using, as described above, different parameters.
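For illustration, the tunable scheme parameters just listed can be grouped into a small configuration record; the field names and example values below are hypothetical, not taken from the paper or from any simulator API:

```c
/* Hypothetical parameter record for the SoC scheme. Memory sizes, bus
 * width, and multiplexer configuration are the tunables named in the
 * text; everything else here is illustrative only. */
typedef struct {
    unsigned prog_mem_kb;     /* program memory size per core */
    unsigned data_mem_kb;     /* data memory size per core */
    unsigned bus_width_bits;  /* width of the shared arbitrated bus */
    unsigned mux_table_index; /* selected H.223 MUX-table entry */
} SocConfig;
```

A terminal instance for a given standard family would then be described by one such record per build, rather than by recompiling the modules.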

The organization of the paper is as follows. In Section 3, the SoC modelling the multimedia terminal is described. This section details the most important devices composing the overall system; attention is focused on the main features and the communication interface. The proposed approach has been validated on an audio-video multiprocessor system reported in Section 4.

Simulation test results for this case study are also reported in Section 4. Finally, conclusions are drawn in Section 5.

3. SYSTEM-ON-CHIP DESCRIPTION

The SoC includes two processors implementing audio and video compression. Each processor is connected to three external memories: a program memory, a data memory, and a memory containing the audio or video data to process. The third memory is loaded at the reset phase with the proper data to process, and it is utilized, together with a timer, in order to simulate the real-time acquisition (audio and video) behaviour.

Audio and video memories are loaded at the simulation reset phase starting from a specified address. The core processors load audio/video data according to the real-time acquisition timers and process the input data, producing compressed streams to be sent to the multiplexer (MUX). The audio coder algorithm synchronizes the production of the multimedia terminal output stream. Therefore the audio processor generates an interrupt at the end of each audio-frame processing, signaling to the multiplexer that the data are available to compose a new MUX-frame.


Figure 1: Audio-video system.

The multiplexer puts together the available data to form an audio-video packet according to the H.223 standard syntax and writes the produced stream to an output file.

The communication between processors and memories occurs through a bus arbiter, which also regulates the accesses to the shared-resource multiplexer.

The SoC architecture has been designed and simulated using the MaxSim environment. MaxSim is a simulation environment for easy modelling and fast simulation of integrated SoCs with multiple cores, peripherals, and memories. The cycle-based scheduler and the transaction-based component interfaces enable high simulation speed while retaining full accuracy. The MaxSim system can be used as a stand-alone simulation module or integrated into HW simulation or HW/SW cosimulation tools [3].

The whole system simulation occurs without any I/O operations except for the storing of the MUX stream to the output file. Figure 1 shows devices and connections in the simulation. The modules composing the proposed SoC are detailed in the rest of this section.

3.1. Bus arbiter

This module regulates all the communications between master (core processors) and slave devices (e.g., memories, MUX) in the SoC. The bus arbiter receives the read-from/write-to memory requests from the two masters and sends them to the appropriate slave device depending on the address range. It also resolves simultaneous access requests to shared resources. The bus arbiter interface includes two ports connected to master devices and six ports connected to slave devices. The two data memories datamem0 and datamem1 are reserved for the master devices 0 and 1, while the other memory devices (datamem2, datamem3) can be used without constraints. The port named arbitratedport is reserved for connecting the shared device. In case of simultaneous access to shared devices, the bus arbiter grants the access to the higher-priority master device. In this project, the higher-priority master is the processor device simulating the audio compression (device 0). Data in memories are stored starting from the address 0x00000000; therefore the bus arbiter uses an offset part of a data address to identify the memory device and the displacement to find the internal memory address. During the simulation phase, the bus arbiter collects statistics on both read and write accesses from master devices; moreover, it measures the number of simultaneous shared-resource requests (conflicts).

3.2. Multiplexer

The multiplexer receives audio and video data and puts them together in a MUX-frame packet following an H.223-like syntax. Each MUX-frame consists of a header followed by audio and video data. The header is composed of the following.

(i) Start of frame: two bytes, usable as CRC and so forth.

(ii) MUX table index: indicates the MUX-frame composition (audio and/or video).

(iii) MUX-frame size: indicates the total size in bytes of the MUX-frame.

The number of audio bytes in a MUX-frame is a multiplexer parameter, varying depending on the audio compression algorithm; it is a fixed size for each MUX-frame. The number of video bytes in a MUX-frame is a variable value influencing the MUX-frame total size; it can vary among MUX-frames up to the maximum MUX-frame size and is therefore limited by the physical channel bandwidth. The audio processor synchronizes the multiplexer, generating an interrupt at the end of each audio processing, signaling that the data are ready for composing a new MUX-frame packet. In order to continuously process data, the multiplexer uses two ping-pong buffers. When an audio interrupt is received, the ping-pong buffers are swapped, allowing the multiplexer to assemble the data from the header, audio, and video components while the audio and video processors keep sending data. Since the ping-pong swapping of the buffers is controlled by the audio interrupt, a delay in audio processing would result in an overflow of the video buffers. In order to avoid the video buffer overflow, the ping-pong swapping is also activated whenever the video buffer size limit is reached.
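The frame assembly described above can be sketched as follows. H.223 does not fix the exact widths used here: the start-of-frame field is taken as two bytes per the list above, while the one-byte table index and size fields, the flag value 0x7E, and the 255-byte frame limit are assumptions for the sake of a compact example:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative header layout: [start flag x2][MUX table index][frame size],
 * followed by a fixed-size audio field and a variable-size video field. */
enum { HDR_BYTES = 4, MAX_FRAME = 255 };

/* Returns the total MUX-frame length written to out, or 0 if the frame
 * would exceed the maximum MUX-frame size (channel bandwidth limit). */
static size_t build_mux_frame(uint8_t *out,
                              const uint8_t *audio, size_t audio_len,
                              const uint8_t *video, size_t video_len,
                              uint8_t table_index)
{
    size_t total = HDR_BYTES + audio_len + video_len;
    if (total > MAX_FRAME)
        return 0;                   /* video field bounded by max size */
    out[0] = 0x7E;                  /* start of frame: two bytes,      */
    out[1] = 0x7E;                  /* usable as CRC and so forth      */
    out[2] = table_index;           /* MUX-frame composition           */
    out[3] = (uint8_t)total;        /* total size in bytes             */
    memcpy(out + HDR_BYTES, audio, audio_len);              /* fixed    */
    memcpy(out + HDR_BYTES + audio_len, video, video_len);  /* variable */
    return total;
}
```

In the simulated system this routine would be driven by the audio interrupt, consuming whichever ping-pong buffer pair was most recently retired.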

3.3. The VLIW-SIM component

The VLIW DSP cores are simulated using the VLIW-SIM ISA simulator. VLIW-SIM is an interpretive reconfigurable ISA simulation environment supporting both application design and architectural exploration. A dynamic library version of the simulator has been developed in order to utilize VLIW-SIM not only as a stand-alone core simulator but also as a component in a more complex HW-SW codesign environment. The ISS has been developed in pure C language in order to reach high simulation speed [1].

The VLIW-SIM component offers all the functionalities of the stand-alone version, such as resource observability and application profiling, and allows the simulated target processor to be changed while maintaining the same component interface, without affecting the other modules of which the SoC is composed.

The interface between VLIW-SIM and other devices is implemented through communication ports. The read/write operation on a data memory port is divided into two phases: access request and check for grant. The VLIW-SIM component is able either to stall, for example, when the bus controller does not grant the access for a read or write operation, or to generate an interrupt to signal a particular event to other SoC modules.
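The two-phase protocol, combined with the priority rule of Section 3.1 (the audio core, device 0, wins simultaneous requests), can be sketched as below; the type and function names are illustrative, not MaxSim or VLIW-SIM APIs:

```c
#include <stdbool.h>

/* Minimal request/grant sketch: each master posts a request (phase 1),
 * the arbiter resolves grants once per cycle, and a master then checks
 * for grant (phase 2); a denied master must stall. */
typedef struct {
    bool req[2];  /* pending requests from masters 0 and 1 */
    int  granted; /* master granted this cycle, or -1 for none */
} BusArbiter;

static void bus_request(BusArbiter *a, int master)
{
    a->req[master] = true;              /* phase 1: access request */
}

static void bus_resolve(BusArbiter *a)  /* arbiter step, once per cycle */
{
    if (a->req[0])      a->granted = 0; /* audio core wins conflicts */
    else if (a->req[1]) a->granted = 1; /* video core */
    else                a->granted = -1;
    a->req[0] = a->req[1] = false;
}

static bool bus_check_grant(const BusArbiter *a, int master)
{
    return a->granted == master;        /* phase 2: false => stall */
}
```

A denied master simply re-issues its request on the next cycle, which is exactly the stall behaviour the component exposes to the rest of the SoC.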

The data to be processed by the coding applications are stored in data memory starting from a specific address depending on the media type. The coders process one frame and store the processed data to the output buffer at the proper MUX-field starting address. As explained in Section 3.2, the multiplexer waits for the audio interrupt signal to generate the MUX-frame packet and to swap the ping-pong buffers for receiving the successive compressed frames.

To better simulate the real-time acquisition, a timer is used in order to start the compression of each frame.

4. A CASE STUDY

The presented design approach is here utilized to design and evaluate a multiprocessor system for audio-video compression. The simulated multimedia terminal is based on two ST210 [18] processors working in parallel, one dedicated to the compression of a video stream following the ITU-T H.263 [19] standard protocol, and the other one executing the ITU-T G.723 [20] speech compression. Data generated by the two processors are multiplexed together in an H.223-like [15] format. The selected applications belong to a set of multimedia algorithms previously defined as relevant applications to verify SoC performance and code optimization. The reported tests have been performed on optimized implementations of the ITU-T standard algorithms specifically implemented for the addressed target architecture.

The VLIW-SIM simulator has been configured to simulate the following ST210 architecture.

(i) Clock rate: 250 MHz.

(ii) I-cache: 32 kB, directly mapped.

(iii) D-cache: 32 kB, 4-way associative (round-robin block replacement).

The real-time acquisition constraints are taken into account by timers, since the G.723 standard expects to process an audio frame every 30 milliseconds. The produced output stream has been decoded using a standard A/V terminal decoder in order to check stream compatibility.
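As a rough check of this constraint, the 30 ms audio-frame period can be converted into timer ticks at the simulated 250 MHz clock, giving 250e6 × 0.030 = 7,500,000 cycles between audio-frame interrupts; the helper name below is ours, not taken from the simulator:

```c
#include <stdint.h>

#define CLOCK_HZ        250000000u  /* ST210 clock rate from the case study */
#define FRAME_PERIOD_MS 30u         /* G.723 audio frame period */

/* Timer reload value for the real-time acquisition timer, in cycles. */
static uint32_t frame_period_cycles(void)
{
    return (uint32_t)((uint64_t)CLOCK_HZ * FRAME_PERIOD_MS / 1000u);
}
```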

Figure 2 shows the MaxSim project for the described system. It is composed of two ST210 cores and their corresponding program memories, four data memories (two for audio processing and two for video processing), two devices for audio and video memory initialization (called memloaders), a bus arbiter, a multiplexer, and two real-time acquisition timers.

The bus arbiter measures audio and video processor accesses for read and write operations, taking also into account simultaneous multiplexer accesses and stalls.

Table 1 shows the collected statistics about the processors and the bus arbiter. Tests described in this document have been performed using the following platform.

(i) OS: Win2000, service pack 3.

(ii) CPU: Pentium IV, 2.0 GHz.

(iii) RAM: 256 MB.

The cycles entry in Table 1 represents the number of clock cycles without stalls due to cache misses, which for VLIW architectures is the number of executed long instructions. Operations is the number of elementary instructions executed in the simulated application, excluding NOP instructions; a number of instructions ranging from one to the long-instruction size are executed in parallel in each cycle. D-cache misses is the number of write and read misses in the data cache, while I-cache misses represents the number of read misses in the instruction cache. Data mem accesses is the whole number of accesses in data memory, including both D-cache hits and misses, while the whole number of accesses in program memory, comprising both I-cache hits and misses, is denoted by instr mem accesses. Bus accesses on write is the total number of accesses from master devices to the bus for write operations. Bus accesses denied is the total number of conflicts in accessing the shared device (MUX). Finally, simulation time is the time elapsed to complete the application simulation. The latter measures the ISA simulator performance and allows, together with operations, calculating the number of simulated cycles per second.
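As a sanity check on the table, the stall percentages can be recomputed from the stall and cycle counts; the counts below are transcribed from Table 1, and the results match the printed percentages after truncation to two decimal places:

```c
/* Stall share of total cycles, as reported in Table 1:
 * video: 156695902 / 210002493 cycles ~ 74.61%
 * audio: 194281104 / 210002870 cycles ~ 92.51% */
static double stall_percent(unsigned long stalls, unsigned long cycles)
{
    return 100.0 * (double)stalls / (double)cycles;
}
```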

In our test, processing 28 audio frames and 20 video frames, multiplexer access conflicts do not occur. This is mainly due to the difference in read/write operation frequency of the two algorithms, giving a probability of simultaneous access to the multiplexer close to zero.

Figure 2: Audio-video system.

Table 1: Simulation statistics.

                         VLIW-SIM1 (video)    VLIW-SIM0 (audio)
Cycles                   210002493            210002870
Stalls                   156695902            194281104
(% total cycles)         (74.61%)             (92.51%)
Operations               132141925            43936322
D-cache misses           310074               1771
I-cache misses           199********
Data mem accesses        25535279             8122811
Instr mem accesses       144536244            46094608
Bus accesses on read     18023276             6865049
Bus accesses on write    7512444              1258055
Bus accesses denied      0                    0
Simulation time (s)      296                  296
Simulated cycles/s       –                    709310

The H.263 and G.723 encoders used in this test are ST210-optimized implementations, allowing a frame compression time considerably below the real-time temporal window. Therefore, the ST210s are stalling for most of the processing time (video: 74.61%, audio: 92.51%), waiting for real-time data acquisition. This result indicates, for example, the possibility to decrease the ST210 clock rates in order to satisfy power consumption constraints, or to select a different architecture for the media processing. Alternatively, keeping the same HW configuration, decoders and other terminal communication modules, such as H.245 control [21], a speech recognition module, or video acquisition processing modules such as RGB to YCbCr conversion, can be implemented on the ST210 cores. A multichannel system can also be developed by executing multiple coder instances on each core.

5. CONCLUSIONS

In this paper we have presented a design methodology based on an ISA simulator integrated into an SoC design environment. Differently from other existing HW design tools, this methodology allows an effective system simulation in a short time, with parametrical degrees of accuracy and observability. Moreover, the described environment allows a "system profiling," both detecting bottlenecks in the communication between system components and evaluating the degree of optimization of the simulated applications using system criteria instead of single-processor criteria. The simulation speed and the short implementation time for each block, comprising the creation of MaxSim components and the reconfiguration of VLIW-SIM, allow us to develop the SW application algorithms together with the HW system architectures.

In order to validate this methodology, the case study of a multiprocessor system for audio-video compression and multiplexing has been presented. In this SoC design evaluation each block is reusable and allows internal architecture exploration to model different HW behaviors. Future work will focus on power estimation models connected with core-simulation-oriented description, in order to take power efficiency into account in both hardware design and application development.


REFERENCES

[1] I. Barbieri, M. Bariani, A. Cabitto, and M. Raggio, "Multimedia-application-driven instruction set architecture simulation," in Proc. IEEE International Conference on Multimedia and Expo (ICME '02), vol. 2, pp. 169–172, Lausanne, Switzerland, August 2002.

[2] I. Barbieri, M. Bariani, A. Cabitto, and M. Raggio, "Efficient DSP simulation for multimedia applications on VLIW architectures," in Mathematics and Simulation with Biological Economical and Musicoacoustical Applications, pp. 83–86, Malta, September 2001.

[3] AXYS Design Automation Inc, http://www.axysdesign.com/.

[4] AXYS Design Automation Inc, "MaxSim Developer Suite User's Guide Version 3.0," Document Version 1.04, September 9th, 2002.

[5] A. Hoffmann, T. Kogel, A. Nohl, et al., "A novel methodology for the design of application-specific instruction-set processors (ASIPs) using a machine description language," IEEE Trans. Computer-Aided Design, vol. 20, no. 11, pp. 1338–1354, 2001.

[6] J. A. Fisher, "Very long instruction word architectures and the ELI-512," in Proc. 10th Annual International Symposium on Computer Architecture (ISCA '83), pp. 140–150, Stockholm, Sweden, June 1983.

[7] P. Faraboschi, G. Desoli, and J. A. Fisher, "The latest word in digital and media processing," IEEE Signal Processing Mag., vol. 15, no. 2, pp. 59–85, 1998.

[8] B. R. Rau and J. A. Fisher, "Instruction-level parallel processing: history, overview and perspective," Journal of Supercomputing, vol. 7, no. 1-2, pp. 9–50, 1993, Special issue on instruction-level parallelism.

[9] P. Lapsley, J. Bier, A. Shoham, and E. A. Lee, DSP Processor Fundamentals: Architectures and Features, IEEE Press, Piscataway, NJ, USA, 1996.

[10] J. Bier, "VLIW architectures for DSP: a two-part lecture outline," in Proc. International Conference on Signal Processing Application and Techniques (ICSPAT '99), pp. 1290–1301, Orlando, Fla, USA, November 1999.

[11] A. Jemai, P. Kission, and A. A. Jerraya, "Architectural simulation in the context of behavioral synthesis," in Proc. Conference on Design, Automation and Test in Europe (DATE '98), pp. 590–595, Paris, France, February 1998.

[12] A. Jemai, P. Kission, and A. A. Jerraya, "Embedded architectural simulation within behavioral synthesis environment," in Proc. Asia and South Pacific Design Automation Conference (ASP-DAC '97), pp. 227–232, Makuhari Messe, Chiba, Japan, January 1997.

[13] L. Guerra, J. Fitzner, D. Talukdar, C. Schläger, B. Tabbara, and V. Zivojnovic, "Cycle and phase accurate DSP modeling and integration for HW/SW co-verification," in Proc. 36th Design Automation Conference (DAC '99), vol. 1, pp. 964–969, New Orleans, La, USA, June 1999.

[14] A. Baghdadi, D. Lyonnard, N. E. Zergainoh, and A. A. Jerraya, "An efficient architecture model for systematic design of application-specific multiprocessor SoC," in Proc. Conference on Design, Automation and Test in Europe (DATE '01), pp. 55–62, Munich, Germany, March 2001.

[15] ITU-T Recommendation H.223, "Multiplexing protocol for low bitrate multimedia communication," May 1995.

[16] ITU-T Recommendation H.324, "Terminal for low bit-rate multimedia communication," March 2002.

[17] 3GPP specification TS 26.111, "Technical Specification Group Services and System Aspects; Codec for circuit switched multimedia telephony service; Modifications to H.324," June 2003.

[18] ST200 Core Architecture Manual, STMicroelectronics, 2000.

[19] ITU-T Recommendation H.263, "Video coding for low bitrate communication," February 1998.

[20] ITU-T Recommendation G.723.1, "Dual rate speech coder for multimedia communication transmitting at 5.3 and 6.3 kbit/s," March 1998.

[21] ITU-T Recommendation H.245, "Control protocol for multimedia communication," July 2003.

Ivano Barbieri was born in Genova, Italy, in 1969. He obtained his M.S. degree in electronic engineering from Genoa University, with a thesis on "Research on image quality evaluation alternative methods to MSE (mean square error) in image coding systems for the subjective redundancy reduction," and his Ph.D. degree with a thesis on "Efficient methodologies for multimedia communication terminal design and testing." Since 1995, he has been employed in the Department of Biophysical and Electronic Engineering (DIBE), Genoa University. His research areas are innovative approaches to image quality evaluation; architectural research on systems for real-time efficient implementation of video coding algorithms, exploring both embedded and single-chip solutions; real-time multimedia systems (platforms, multiplexing, and control issues); DSP architectures and development environments; architecture modelling for media processing; and embedded systems for mobile (low-power) applications.

Massimo Bariani was born in Genova, Italy, in 1970. He obtained his M.S. degree in electronic engineering from Genoa University, with a thesis on "Development and implementation of a multipoint control unit for multimedia videoconferences," and his Ph.D. degree with a thesis on "Modelling and simulation of VLIW architectures for HW-SW codesign of embedded systems." He is currently employed as a researcher in the Department of Biophysical and Electronic Engineering (DIBE), Genoa University. In the electronic system field, his interests include hardware design and simulation, multimedia algorithm implementation for VLIW architectures, architectural exploration, and real-time multimedia communication based on standard protocols.

Alessandro Scotto was born in Genova, Italy, in 1976. He obtained his M.S. degree in electronic engineering from Genoa University in 2001, with a thesis on "Multichannel system for real-time voice coding on DSP architecture." Since 2002, he has been a Ph.D. student in the Department of Biophysical and Electronic Engineering (DIBE), Genoa University. His research areas are architectural research on systems for real-time efficient implementation of video-voice coding algorithms, exploring both embedded and single-chip solutions; DSP architectures and development environments; architecture modelling for media processing; and embedded systems for mobile (low-power) applications.

Marco Raggio was born in Chiavari, Genova, Italy, in 1964. He obtained his M.S. degree in electronic engineering from Genoa University, with a thesis on "Development and real-time test of video compression algorithms," and his Ph.D. degree with a thesis on "Implementation and simulation of real-time multimedia embedded systems for videotelephony applications and advanced DSP architectures." Since 1995, he has been employed as a Research Project Manager/Officer in the Department of Biophysical and Electronic Engineering (DIBE), Genoa University. In the electronic system field, his interests involve hardware design and simulation and interactive real-time multimedia architecture design, that is, for mobile terminals and surveillance systems. His activities also involve field trial setup, audit, and dissemination. In the networking field, he has expertise in LAN design, configuration, maintenance, and security. He participates in university seminars with discussions on video coding, standards for multimedia and streaming, DSP, industrial field bus, and embedded systems.
