2011 International Conference on Innovations in Information Technology

A Survey of Wireless Multimedia Sensor Networks

Challenges and Solutions

Mariam AlNuaimi, Farag Sallabi and Khaled Shuaib

Faculty of Information Technology United Arab Emirates University

United Arab Emirates
Mariam.alnuaimi@uaeu.ac.ae

Abstract— Wireless Multimedia Sensor Networks (WMSNs) have recently gained the attention of the research community due to their wide range of applications and the advancement of CMOS cameras. In this survey paper we outline WMSN applications and discuss their challenges and resource constraints. In addition, the paper investigates the solutions proposed by the research community to overcome these challenges and constraints through architecture design and multimedia encoding paradigms. Moreover, some WMSN deployments carried out by different research groups are discussed. Finally, we provide a detailed discussion of the proposed optimization solutions and outline research areas with room for improvement.

Keywords: wireless multimedia sensor networks, applications of WMSNs, challenges and resource constraints in WMSNs.

I. INTRODUCTION

Wireless Sensor Networks (WSNs) have been the focus of many researchers during the last decade due to advances in low-power and low-cost hardware, i.e., micro-electro-mechanical systems (MEMS) [1]. A wireless sensor network consists of wirelessly interconnected devices that can interact with each other and with their surrounding environment by controlling and sensing physical parameters [2]. Moreover, the continuously growing interest in WSNs can be attributed to the many new applications that have been deployed, such as environment control [3], biomedical research [4], intelligent homes, and health applications [1, 5].

During the last few years, Wireless Multimedia Sensor Networks (WMSNs) have appeared. WMSN technology has emerged due to the production of cheap CMOS (Complementary Metal Oxide Semiconductor) cameras and microphones, which can acquire rich media content from the environment, such as images and videos. A WMSN can be defined as a network of wirelessly interconnected sensor nodes equipped with multimedia devices, such as cameras, that are capable of retrieving video and audio streams, images, and scalar sensor data [6, 31]. WMSNs are currently being used in several applications, as outlined below.

A. Multimedia surveillance sensor networks

Multimedia surveillance applications are used to detect, recognize and track objects in order to take appropriate actions. These applications need to continuously capture images in order to monitor certain events [7, 14]. They are mainly used for detecting crimes or terrorist attacks [8, 33].

B. Traffic avoidance and control systems

Traffic avoidance applications are used to monitor car traffic and provide traffic routing advice to avoid congestion. M. Jokela [9] proposed a model that uses three different kinds of cameras to monitor the traffic situation around a vehicle and detect problems: a near-infrared camera, a thermal imaging system for animal detection, and a regular CCTV camera for ice and snow detection.

C. Advanced health care delivery

Health care delivery applications are used for patient monitoring and care at remote sites, for example monitoring patients' facial expressions, respiratory conditions or movement and forwarding these images to doctors in distant hospitals to support better diagnosis. In [10], a healthcare sensor periodically captures vital signs (e.g., body temperature, blood pressure) and sends them to the gateway. Once the information is processed by the gateway, it is forwarded to doctors to help them make an initial diagnosis. After that, wireless multimedia sensor nodes are used to capture and send back image or video data to help doctors obtain more detailed information and make a final diagnosis.

D. Automated parking advice

Automated parking advice applications keep track of available parking spaces and provide guidance to drivers to allocate free parking spaces [11, 12].

E. Smart homes

Smart home applications are used to automate the lives of residents. They are usually used to adapt the house environment according to the residents' preferences (e.g., lighting, air conditioning, heating) based on detecting the presence of certain persons inside the house [13].

F. Environmental monitoring

Environmental monitoring applications are used for monitoring remote and unreachable areas over a long period of time. In these applications, energy-efficient operation is particularly important in order to extend monitoring over a long period of time. Most of the time, cameras are combined with other types of sensors into a heterogeneous network, so that cameras are triggered only when an event is detected by the other, lighter sensors in the network [27].

G. Telepresence systems

Telepresence systems enable virtual visits to locations such as museums, galleries or exhibition rooms that are monitored by a set of cameras. These applications provide the user with the current view from any viewing point, giving him/her the sense of being physically present at the remote location [15, 28, 29].

The rest of this paper is organized as follows. In Section II we introduce challenges and resource constraints in WMSNs. In Section III we present in detail the efforts made by the research community to optimize and overcome challenges and resource constraints in WMSNs through architecture and multimedia coding paradigms. In Section IV, we introduce examples of WMSNs deployed by different research groups. A detailed discussion of the optimization solutions outlined in Section III is provided in Section V. Finally, we conclude the paper along with future work in Section VI.

II. WMSNS CHALLENGES AND RESOURCE CONSTRAINTS

In this section we discuss some of the unique requirements and challenges for WMSN applications, such as high bandwidth demand, multimedia coding techniques, and application-specific QoS requirements [30].

A. High Bandwidth Demand

Multimedia content (e.g., images and video streams) requires transmission bandwidth that is higher than that supported by currently available off-the-shelf sensors. For example, the maximum transmission rate of state-of-the-art IEEE 802.15.4 compliant components such as Crossbow's TelosB or MICAz motes [15, 45, 46] is 250 kbps. As a result, multimedia sensors require higher data rates than scalar sensors, with similar power consumption.
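To put the mismatch in numbers, the short calculation below is our own back-of-the-envelope sketch; the QVGA frame size, 8-bit depth and 5 fps rate are assumed illustrative values, not figures taken from the paper.

```python
# Rough bandwidth estimate for uncompressed video on an 802.15.4 mote.
# All video parameters below are illustrative assumptions, not from the paper.

IEEE_802_15_4_RATE_BPS = 250_000       # nominal 802.15.4 data rate (250 kbps)

width, height = 320, 240               # assumed QVGA frame
bits_per_pixel = 8                     # assumed 8-bit grayscale
frames_per_second = 5                  # assumed low frame rate

raw_bps = width * height * bits_per_pixel * frames_per_second
print(f"Raw video bit rate: {raw_bps / 1e6:.2f} Mbps")
print(f"802.15.4 capacity:  {IEEE_802_15_4_RATE_BPS / 1e3:.0f} kbps")
print(f"Reduction needed:   {raw_bps / IEEE_802_15_4_RATE_BPS:.0f}x")
```

Even under these modest assumptions the raw stream exceeds the radio's nominal capacity by more than an order of magnitude, which is why the coding techniques below matter.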

B. Multimedia Coding Techniques

Multimedia processing and source coding have been used to handle multimedia content over wireless sensor networks and to support real-time multimedia applications. These coding techniques should be designed in such a way that they fit current resource capabilities such as memory, data rate, battery, processing power and bandwidth. Thus, multimedia coding techniques should be used to decrease the amount of multimedia content transferred over the network by extracting the useful information from the captured images and video streams while preserving the application-specific QoS requirements [17].

Recently, a coding technique called distributed source coding [16] showed that the traditional balance of a complex encoder and a simple decoder can be reversed. Distributed source coding shifts the complexity to the base station or the sink, which allows the use of simple encoders in the sensor nodes. Distributed source coding is discussed in more detail in Section III.

C. Application-specific QoS requirements

A WMSN application has different requirements from the usual scalar sensor applications. In addition to the data delivery required by scalar sensor networks, multimedia data include images and streaming multimedia content. Images are multimedia data obtained in a short time period, whereas streaming multimedia content is generated over longer time periods and requires continuous data capturing and delivery. As a result, better hardware and better coding and compression algorithms are needed in order to deliver the QoS required by specific applications [6].

D. Resource Constraints

Multimedia sensors differ from scalar sensor devices in terms of the type of data they capture. Video, image and audio data require more resources, such as battery, memory, processing capability, and achievable data rates [2, 15, 19].

III. WMSNS OPTIMIZATION TECHNIQUES

Researchers have been working hard to overcome or minimize the effect of some of the WMSN resource constraints and challenges discussed in the previous section. These efforts can be classified into two areas: architecture designs and source coding paradigms, as shown in Figure 1.

Figure 1. WMSNs optimization research areas

A. WMSNs Architectures

Different architectures have been proposed to show how WMSNs can be made more scalable and more efficient depending on the specific application QoS requirements and constraints [48]. Based on the designed network topology and architecture, the available resources in the network can be efficiently utilized and fairly distributed throughout the network, and the desired operations on the multimedia content can be handled. In general, network architectures for WMSNs can be divided into the three types outlined below [20, 34, 35, 36]. A WMSN is composed of several components, which include video and audio sensors, scalar sensors, multimedia processing hubs, storage hubs, the sink, and the gateway [11].

Single-tier flat architecture

In this architecture the network consists of homogeneous sensor nodes with the same capabilities and functionalities. All nodes can perform any function, such as image capturing, multimedia processing and data transfer to the sink over a multi-hop path [11, 6, 20], as shown in Figure 2.

Figure 2. Single-tier flat architecture

Single-tier clustered architecture

The single-tier clustered architecture consists of heterogeneous sensors, such as camera, audio and scalar sensors, grouped together to form a cluster. All heterogeneous sensors belonging to the same cluster send their sensed data to the cluster head, which has more resources and can perform complex data processing. The cluster head is connected either directly or indirectly to the sink or the gateway through a multi-hop path, as shown in Figure 3 [11, 6, 20].

Figure 3. Single-tier clustered architecture

Multi-tier architecture

In this architecture, the first tier consists of scalar sensors that perform simple tasks, like measuring scalar data from the surrounding environment (e.g., light, temperature, etc.); the second tier consists of camera sensors that perform more complex tasks such as image capturing or object recognition; and the third tier consists of more powerful, high-resolution video camera sensors that are capable of performing even more complex tasks, like video streaming or object tracking [11, 20, 41]. Each tier has a central hub for data processing and for communicating with the upper tier. The third tier is connected to the sink or the gateway through a multi-hop path [47, 42, 43], as shown in Figure 4.

Figure 4. Multi-tier architecture
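One way to read this division of labor (echoed in the discussion in Section V) is that the cheap scalar tier senses continuously while the camera tiers are woken only on demand. The minimal sketch below is our own illustration of that event-triggered hand-off; the class names, threshold and readings are invented and do not correspond to any cited system.

```python
# Minimal sketch of tiered, event-triggered sensing in a multi-tier WMSN.
# The node classes, threshold and readings below are illustrative only.

class ScalarNode:
    """Tier-1 node: cheap scalar sensing (e.g., vibration, light)."""
    def __init__(self, threshold):
        self.threshold = threshold

    def detects_event(self, reading):
        return reading > self.threshold


class CameraNode:
    """Upper-tier node: stays asleep until a lower tier reports an event."""
    def __init__(self, name):
        self.name = name
        self.awake = False

    def wake_and_capture(self):
        self.awake = True
        return f"{self.name}: captured image for recognition/tracking"


def tiered_pipeline(scalar_reading, scalar_node, camera_node):
    # Only the low-power tier samples continuously; the camera tier is
    # triggered on demand, which is where the energy saving comes from.
    if scalar_node.detects_event(scalar_reading):
        return camera_node.wake_and_capture()
    return "no event: camera tier stays asleep"


if __name__ == "__main__":
    scalar = ScalarNode(threshold=0.7)
    camera = CameraNode("cam-1")
    print(tiered_pipeline(0.2, scalar, camera))   # below threshold
    print(tiered_pipeline(0.9, scalar, camera))   # event -> camera wakes
```

The energy saving comes from keeping the expensive tier asleep during the (typically long) intervals in which the scalar readings stay below the trigger threshold.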

B. Coding paradigms

Multimedia applications require more resources (e.g., high processing capabilities, extensive encoding and decoding, high bandwidth, etc.) than data sensing applications in wireless sensor networks. The goal of researchers in the area of WMSN coding is to find a coding paradigm that has low complexity, produces a low output bandwidth, tolerates loss, and consumes as little power as possible. Two different types of coding paradigms are discussed below.

Individual source coding

Individual source coding is the paradigm used in multimedia coding where each node codes its information independently of other nodes [19]. Individual source coding is simple and does not require any kind of interaction between the nodes. However, when there is a high concentration of multimedia sensors on a specific event, individual source coding results in large redundancies. This is because all sensors attempt to transmit similar data at the best possible quality to the sink or the base station, resulting in a large number of copies of the same data. These copies might cause major congestion and energy exhaustion in the network. Thus, individual source coding is still an open research area for improvement [18, 6].

Distributed source coding

Distributed source coding refers to the compression of multiple correlated sensor outputs and their joint decoding at a central decoder at the base station or the sink node [6, 18]. Distributed source coding reverses the traditional one-to-many video coding paradigm used in most video encoders/decoders, such as MPEGx and H.26x, into a many-to-one paradigm. In the one-to-many paradigm, the encoders perform complex encoding while the decoders are simpler. Distributed source coding, however, uses a many-to-one coding paradigm and exchanges the complex encoder for a complex decoder. Therefore, the encoders at the video sensor nodes can be designed to be simple and to require fewer resources, while the sink node or base station hosts the more complex decoder [20, 24, 32].
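To make the many-to-one idea concrete, the toy sketch below is our own illustration of Slepian-Wolf style coding rather than an algorithm from the paper: two correlated 3-bit readings are assumed to differ in at most one bit, so a sensor transmits only the 2-bit coset index (syndrome) of its reading with respect to the repetition code {000, 111}, and the sink recovers the full reading using its own correlated reading as side information.

```python
# Toy Slepian-Wolf style distributed coding sketch (illustrative assumption:
# two correlated 3-bit readings differ in at most one bit).
from itertools import product

def syndrome(x):
    # 2-bit syndrome of x for the repetition code {000, 111}:
    # the parities (x0^x1, x1^x2) identify x's coset.
    return (x[0] ^ x[1], x[1] ^ x[2])

def encode(x):
    # Sensor-side encoder: send only 2 bits instead of 3, with no
    # communication to the other sensor.
    return syndrome(x)

def decode(syn, y):
    # Sink-side decoder: among the two words in the coset with this syndrome,
    # pick the one closest (in Hamming distance) to the side information y.
    coset = [x for x in product((0, 1), repeat=3) if syndrome(x) == syn]
    return min(coset, key=lambda x: sum(a != b for a, b in zip(x, y)))

if __name__ == "__main__":
    x = (1, 0, 1)          # reading at the video sensor
    y = (1, 1, 1)          # correlated reading already available at the sink
    syn = encode(x)        # only the 2-bit syndrome crosses the network
    assert decode(syn, y) == x
    print("sent bits:", syn, "-> recovered:", decode(syn, y))
```

The sensor-side encoder is just two XORs and never communicates with the other sensor, while the search over the coset, i.e. the complexity, sits entirely at the sink; this is exactly the complexity shift described above.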

IV. EXAMPLES OF CURRENTLY DEPLOYED WMSNS

A. IrisNet - Intel Research Pittsburgh

IrisNet is a platform for a two-tiered heterogeneous WMSN developed by the Intel research group in Pittsburgh. Video sensors and scalar sensors are spread throughout the environment and collect potentially useful data. IrisNet allows users to perform Internet-like queries on the video and scalar sensors. It reduces the bandwidth consumed: instead of transferring the raw data across the network, IrisNet sends only a potentially small amount of processed data through the use of distributed filtering. Sensor data are represented in the Extensible Markup Language (XML). The user views the sensor network as a single unit that can be queried through a high-level language, using simple query statements or more complex forms involving arithmetic and database operators [12].
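To illustrate the kind of high-level query interface described above, the sketch below represents sensor readings as XML and evaluates a simple XPath-style selection with Python's standard library. The element names, attributes and values are invented for illustration and are not taken from IrisNet.

```python
# Illustrative sketch of querying XML-represented sensor data, in the spirit of
# IrisNet's high-level queries. The XML schema and values below are invented.
import xml.etree.ElementTree as ET

SENSOR_XML = """
<region name="parking-lot-A">
  <node id="cam-3" type="camera"><occupancy>12</occupancy></node>
  <node id="s-17" type="scalar"><temperature>29.5</temperature></node>
  <node id="cam-7" type="camera"><occupancy>3</occupancy></node>
</region>
"""

root = ET.fromstring(SENSOR_XML)

# XPath-style selection: all camera nodes in the region.
cameras = root.findall("./node[@type='camera']")

# Simple filtering done near the data instead of shipping raw images.
busy = [n.get("id") for n in cameras if int(n.findtext("occupancy")) > 10]
print("camera nodes reporting high occupancy:", busy)
```

Filtering near the data in this way, rather than shipping raw frames to the querier, is the bandwidth-saving pattern the paragraph above describes.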

B. SensEye - University of Massachusetts

SensEye is a three-tier heterogeneous WMSN for surveillance applications developed by the University of Massachusetts. The lowest tier consists of MICA2 motes and scalar sensors, e.g. vibration sensors. The second tier is made up of motes equipped with low-resolution camera sensors. The third tier consists of Stargate nodes equipped with web cameras [25]. The SensEye surveillance application consists of three tasks: object detection, object recognition, and object tracking.

V. DISCUSSION

In this section, we discuss the performance of the optimization solutions outlined in Section III.

A. Architecture design

The objective of architecture design in WMSNs is to design an architecture such that the available resources in the network can be efficiently utilized and fairly distributed throughout the network, the network is scalable enough to handle growth in its size, and the energy lifetime of the nodes in the network is extended [50].

In a single-tier flat architecture, a set of homogeneous sensors is deployed and each sensor is programmed to perform all possible application tasks, from image capturing through multimedia processing to relaying data toward the sink on a multi-hop basis. Therefore, the energy of the sensors will be quickly depleted, preventing these sensors from achieving their objectives. Furthermore, the single-tier flat architecture is not scalable enough to handle the complex and dynamic range of applications offered over WMSNs.

In a single-tier clustered architecture, the cluster head performs all intensive multimedia processing on the data gathered by all sensors in the same cluster. Therefore, the processing and storage capability is determined by the cluster head resources. Moreover, the energy of the cluster head will be depleted quickly [37].

In a multi-tier architecture, different tasks are distributed throughout the network. The resource-constrained, low-power elements are in charge of performing simpler tasks, while resource-rich, high-power sensors perform complex tasks. Most of the time, the resource-rich, high-power sensors in the upper tier are triggered only when an event is detected by the low-power sensors, to save their energy and reduce the amount of data sent. Moreover, data processing and storage can be performed in a distributed fashion at each tier. It was shown through experiments in [27] that a multi-tier architecture has significant advantages over single-tier architectures in terms of scalability, lower cost, lower consumed energy, better coverage, higher functionality, and better reliability [47]. Table 1 presents a summary comparison of the three types of architectures in terms of kind of sensors, processing type and storage type [41, 42].


TABLE 1. COMPARING DIFFERENT TYPES OF ARCHITECTURES

Architecture                       | Sensors               | Processing              | Storage
Single-tier flat architecture      | Homogeneous sensors   | Distributed processing  | Centralized storage
Single-tier clustered architecture | Heterogeneous sensors | Centralized processing  | Centralized storage
Multi-tier architecture            | Heterogeneous sensors | Distributed processing  | Distributed storage

B. Coding paradigms

The objective of coding paradigms is to find coding techniques that have low complexity, require little processing, produce a low output bandwidth and consume as little power as possible [40, 44]. Table 2 presents a comparison between individual source coding and distributed source coding techniques. Distributed source coding requires fewer encoder resources, and in WMSNs the encoder is the sensor. Moreover, if more than one multimedia sensor is monitoring the same scene, the sensors communicate with each other so that only one copy of the sensed data is sent. As a result, the amount of data sent under distributed source coding is reduced and the network is not congested [49]. In individual source coding, by contrast, each node codes its information independently without communicating with other nodes. As a result, if several nodes are monitoring the same scene, a lot of redundant data is transmitted through the network, causing congestion.

TABLE 2. COMPARISON OF ENCODING TECHNIQUES

Individual source coding | Distributed source coding
The stream of data is encoded by one source (sensor) and decoded by many destinations. | The stream of data is encoded by many sensors and decoded by the resource-rich device (base station).
The encoders at the video sensor nodes are complex and require a lot of resources. | The encoders at the video sensor nodes are simple, requiring fewer resources.
The decoder is simple and requires fewer resources. | The decoder at the base station is complex and requires more resources.
No communication exists between sensors. | There is communication between sensors.
Example: MPEGx and H.26x. | Example: Slepian-Wolf, Wyner-Ziv.

C. Physical layer protocol standards for WMSNs

Physical layer technologies can be classified, based on the modulation scheme and bandwidth considerations [38], into three groups: narrowband, spread spectrum, and ultra-wideband (UWB) technologies. They can also be classified, based on standard protocols, into IEEE 802.15.4 ZigBee, IEEE 802.15.1 Bluetooth, IEEE 802.11 WiFi and IEEE 802.15.3a UWB. Table 3 summarizes the specifications of the different physical layer standards. ZigBee [39] is the most popular standard radio protocol used in wireless sensor networks because of its low-cost and low-power characteristics. However, the ZigBee standard is not suitable for high data rate applications such as multimedia streaming over WMSNs, nor for WMSN application-specific QoS. Therefore, researchers such as Akyildiz [6] believe that UWB should be used as the standard physical layer protocol in WMSNs. UWB offers low power consumption and a high data rate for short-range wireless communication.

TABLE 3. SPECIFICATIONS OF THE PHYSICAL LAYER STANDARDS IN WMSNS

             | ZigBee                    | Bluetooth | WLAN      | UWB
Data rate    | 250 kbps                  | 1-3 Mbps  | 54 Mbps   | 250 Mbps
Output power | 1-2 mW                    | 1-100 mW  | 40-200 mW | 1 mW
Range        | 10-100 m                  | 1-100 m   | 20-100 m  | < 10 m
Frequency    | 2.4 GHz, 915 MHz, 868 MHz | 2.4 GHz   | 2.4 GHz   | 3.1-10.6 GHz

VI. CONCLUSIONS AND FUTURE WORK

In this paper, we introduced WMSN technologies and their different applications and discussed the major challenges and resource constraints pertaining to WMSNs. We surveyed and classified the optimization techniques that have been investigated by researchers to overcome certain challenges. In addition, we discussed these optimization solutions in detail, showing their suitability for WMSNs. We also surveyed the existing off-the-shelf examples of WMSN deployments. Our future work will focus on the experimental deployment of WMSNs for particular applications. In addition, performance analysis and enhancement of existing technologies will be studied and proposed.

REFERENCES

[1] Akyildiz, I.F.; Su, W.; Sankarasubramaniam, Y.; Cayirci, E. Wireless sensor networks: a survey. Comput. Netw. 2002, 38, 393–422.
[2] Baronti, P.; Pillai, P.; Chook, V.W.; Chessa, S.; Gotta, A.; Fu, Y.F. Wireless sensor networks: a survey on the state of the art and the 802.15.4 and ZigBee standards. Comp. Commun. 2007, 30, 1655–1695.
[3] D. Steere, A. Baptista, D. McNamee, C. Pu, J. Walpole, “Research challenges in environmental observation and forecasting systems,” in Proc. ACM/IEEE MOBICOM ’00, Boston, August 2000.
[4] L. Schwiebert, S. K. S. Gupta, and J. Weinmann, “Research challenges in wireless networks of biomedical sensors,” in Proc. ACM/IEEE MOBICOM ’01, pp. 151-165, 2001.
[5] D. Puccinelli and M. Haenggi, “Wireless sensor networks: applications and challenges of ubiquitous sensing,” IEEE Circuits and Systems Magazine, vol. 5, no. 3, pp. 19–31, 2005.
[6] Akyildiz, I.F.; Melodia, T.; Chowdhury, K.R. A survey on wireless multimedia sensor networks. Comput. Netw. 2007, 51, 921–960.
[7] Chitnis, M.; Liang, Y.; Zheng, J.; Pagano, P.; Lipari, G. Wireless line sensor network for distributed visual surveillance. In Proceedings of the 6th ACM Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks, Tenerife, Canary Islands, Spain, October 28-29, 2009; pp. 79-84.
[8] Y.-C. Tseng, Y.-C. Wang, K.-Y. Cheng and Y.-Y. Hsieh, iMouse: an integrated mobile surveillance and wireless sensor system, IEEE Computer 40 (6), 2007, pp. 60–66.
[9] M. Jokela, M. Kutila, J. Laitinen, F. Ahlers, N. Hautière, and T. Schendzielorz, Optical Road Monitoring of the Future Smart Roads – Preliminary Results, World Academy of Science, Engineering and Technology 34, 2007.
[10] SHA Chao, WANG Ru-chuan, HUANG Hai-ping, SUN Li-juan, A type of healthcare system based on intelligent wireless sensor networks, The Journal of China Universities of Posts and Telecommunications, 2010.
[11] I. F. Akyildiz, T. Melodia, and K. R. Chowdhury, “Wireless Multimedia Sensor Networks: Applications and Testbeds,” Proc. IEEE, vol. 96, no. 10, pp. 1588–1605, Oct. 2008.
[12] J. Campbell, P. B. Gibbons, S. Nath, P. Pillai, S. Seshan, and R. Sukthankar, IrisNet: an internet-scale architecture for multimedia sensors, in Proc. ACM Multimedia Conf., 2005.


[14] J. Taysheng, C. Chien-Hsu, W. Jhing-Fa, W. Chung-Hsien, K. Jar-Ferr, C. Sheng-Tzong, F. Jing, and C. Pau-Choo, “House of the future relies on multimedia and wireless sensors”, SPIE Optoelectronics & Communications, 2008.
[15] Y. Faheem, S. Boudjit, K. Chen, “Wireless Multimedia Sensor Networks: Requirements & Current Paradigm”, Plate-forme THD, 2010.
[16] Wireless Modules.
[17] B. Girod, A. Aaron, S. Rane, D. Rebollo-Monedero, Distributed video coding, Proc. IEEE 93 (1) (2005) 71–83.
[18] E. Gurses and O. Akan, Multimedia communication in wireless sensor networks, Annals of Telecommunications, 60(7-8):799–827, 2005.
[19] Misra, S.; Reisslein, M.; Xue, G., “A survey of multimedia streaming in wireless sensor networks”, IEEE Communications Surveys and Tutorials, in print.
[20] Almalkawi, I.T.; Guerrero Zapata, M.; Al-Karaki, J.N.; Morillo-Pozo, J. Wireless Multimedia Sensor Networks: Current Trends and Future Directions. Sensors 2010, 10, 6662-717.
[21] Prati, A.; Vezzani, R.; Benini, L.; Farella, E.; Zappi, P. An integrated multi-modal sensor network for video surveillance. In Proceedings of the Third ACM International Workshop on Video Surveillance & Sensor Networks, VSSN ’05, ACM: New York, NY, USA, 2005; pp. 95–102.
[22] Kim, H.; Rahimi, M.; Lee, D.U.; Estrin, D.; Villasenor, J. Energy-aware high resolution image acquisition via heterogeneous image sensors. IEEE J. Sel. Top. Signal Process. 2008, 2, 526–537.
[23] B. Girod, A. Aaron, S. Rane, D. Rebollo-Monedero, Distributed video coding, Proc. IEEE 93 (1) (2005) 71–83.
[24] A. Aaron, S. Rane, R. Zhang, and B. Girod, “Wyner-Ziv Coding for Video: Applications to Compression and Error Resilience,” in Proc. of IEEE Data Compression Conf. (DCC), Snowbird, UT, March 2003, pp. 93-102.
[25] Kulkarni, P.; Ganesan, D.; Shenoy, P.; Lu, Q. SensEye: a multi-tier camera sensor network. In Proceedings of the 13th Annual ACM International Conference on Multimedia, MULTIMEDIA ’05, ACM: New York, NY, USA, 2005; pp. 229–238.
[26] Wireless Multimedia Sensor Networks, <http://www.ece.gatech.edu/research/labs/bwn/WMSN/testbed.html>.
[27] T. He, S. Krishnamurthy, L. Luo, “VigilNet: an integrated sensor network system for energy-efficient surveillance,” ACM Transactions on Sensor Networks, 2006.
[28] O. Schreer, P. Kauff, and T. Sikora, 3D Video Communication, John Wiley & Sons, New York, NY, USA, 2005.
[29] N. J. McCurdy and W. Griswold, “A system architecture for ubiquitous video,” in Proceedings of the 3rd Annual International Conference on Mobile Systems, Applications, and Services, 2005.
[30] Y. Charfi, N. Wakamiya, and M. Murata, “Challenging issues in visual sensor networks,” Advanced Network Architecture Laboratory, Osaka University, 2007.
[31] W. Wolf, B. Ozer, and T. Lv, “Smart cameras as embedded systems,” Computer, vol. 35, no. 9, pp. 48–53, 2002.
[32] P. Remagnino, A. I. Shihab, and G. A. Jones, “Distributed intelligence for multi-camera visual surveillance,” Pattern Recognition, vol. 37, no. 4, pp. 675–689, 2004.
[33] T. He, S. Krishnamurthy, L. Luo, “VigilNet: an integrated sensor network system for energy-efficient surveillance,” ACM Transactions on Sensor Networks, vol. 2, 2006.
[34] N. J. McCurdy and W. Griswold, “A system architecture for ubiquitous video,” in Proceedings of the 3rd Annual International Conference on Mobile Systems, Applications, and Services (MobiSys ’05), 2005.
[35] S. Hengstler and H. Aghajan, “Application-oriented design of smart camera networks,” in Proceedings of the 1st ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC ’07), 2007, pp. 12–19.

[36] A. Barton-Sweeney, D. Lymberopoulos, and A. Savvides, “Sensor localization and camera calibration in distributed camera sensor networks,” in Proceedings of the 3rd International Conference on Broadband Communications, Networks and Systems (BROADNETS ’06), 2006.
[37] T. H. Ko and N. M. Berry, “On scaling distributed low-power wireless image sensors,” in Proceedings of the 39th Annual Hawaii International Conference on System Sciences, 2006.
[38] Wong, K.D. Physical layer considerations for wireless sensor networks. In Proceedings of Networking, Sensing and Control, 2004 IEEE International Conference, Taipei, Taiwan, 2004; Volume 2, pp. 1201–1206.
[39] IEEE 802.15 WPAN Task Group 4 (TG4). Available online: http://grouper.ieee.org/groups/802/15/pub/TG4.html (accessed on 21 Jan 2011).
[40] Capo-Chichi, E.; Friedt, J.M. Design of embedded sensor platform for multimedia application. In Proceedings of the First International Conference on Distributed Framework and Applications, DFmA 2008, Penang, Malaysia, 2008; pp. 146–150.
[41] S. Soro and W. B. Heinzelman, “On the coverage problem in video-based wireless sensor networks,” in Proceedings of the 2nd International Conference on Broadband Networks (BROADNETS ’05), 2005, pp. 9–16.
[42] Margi, C.; Petkov, V.; Obraczka, K.; Manduchi, R. Characterizing energy consumption in a visual sensor network testbed. In Proceedings of the 2nd International Conference on Testbeds and Research Infrastructures for the Development of Networks and Communities, TRIDENTCOM 2006, Barcelona, Spain, 2006.
[43] Rahimi, M.; Baer, R.; Iroezi, O.I.; Garcia, J.C.; Warrior, J.; Estrin, D.; Srivastava, M. Cyclops: in situ image sensing and interpretation in wireless sensor networks. In Proceedings of the 3rd International Conference on Embedded Networked Sensor Systems, SenSys ’05, San Diego, CA, USA, 2005; pp. 192–204.
[44] Chen, P.; Ahammad, P.; Boyer, C.; Huang, S.I.; Lin, L.; Lobaton, E.; Meingast, M.; Oh, S.; Wang, S.; Yan, P.; Yang, A.; Yeo, C.; Chang, L.C.; Tygar, J.; Sastry, S. CITRIC: A low-bandwidth wireless camera network platform. In Proceedings of the Second ACM/IEEE International Conference on Distributed Smart Cameras, ICDSC 2008, 2008; pp. 1–10.
[45] Wireless Sensor Platform, W.S.N. MICA-family Wireless Mote Platform Specifications. Available online: http://www.xbow.com/Products/productdetails.aspx?sid=156 (accessed on 11 Dec 2010).
[46] Wireless Sensor Platform, W.S.N. TmoteSky Platform Specifications. Available online: http://www.sentilla.com/moteiv-transition.html (accessed on 23 Nov 2010).
[47] Tezcan, N.; Wang, W. Self-Orienting Wireless Multimedia Sensor Networks for Maximizing Multimedia Coverage. In Proceedings of the IEEE International Conference on Communications, ICC ’08, Beijing, China, 2008; pp. 2206–2210.
[48] Sexena, R.N.; Roy, A.; Shin, J. Cross-layer algorithms for QoS enhancement in wireless multimedia sensor networks. IEICE Trans. Commun. 2008, E91-B, 2716–2719.
[49] Shu, L.; Hauswirth, M.; Zhang, Y.; Ma, J.; Min, G. Cross Layer Optimization on Data Gathering in Wireless Multimedia Sensor Networks within Expected Network Lifetime. Springer J. Univers. Comput. Sci. (JUCS), 2009.
[50] Shu, L.; Zhang, Y.; Yu, Z.; Yang, L.T.; Hauswirth, M.; Xiong, N. Context-aware cross-layer optimized video streaming in wireless multimedia sensor networks. Springer J. Supercomput. (JoS), 2009.
