IP-BASED TV TECHNOLOGIES, SERVICES, AND MULTIDISCIPLINARY APPLICATIONS
Storisphere: From TV Watching to Community Story Telling

Mu Mu, Steven Simpson, Craig Bojko, Matthew Broadbent, James Brown, Andreas Mauthe, Nicholas Race, and David Hutchison, Lancaster University
ABSTRACT Conventional television services are increasingly challenged by more interactive and user-centric video sharing applications. With the growing popularity of social networks and video services, users are becoming the editors and broadcasters of their own stories. User-generated video content, which provides unique perspectives from individuals, is likely to be the new medium to complement professional broadcast TV for story sharing, especially in user communities of specific interest. We have developed Storisphere to provide a web-based collaborative video content workspace in which members of a community can compose and share video stories using desktop or mobile devices. Storisphere is currently being evaluated for video storytelling by various user communities.
INTRODUCTION With the increasing popularity of electronic social media, conventional television no longer fulfills users' needs for social communication. In a survey conducted by Red Bee Media, one third of users said they were more likely to watch a show live than on catch-up if there was a lot of social buzz around it [1]. Deloitte's survey of 4000 people likewise found that nearly half of 16–24-year-olds use messaging, email, Facebook, or Twitter to discuss what they are watching on TV [2]. The interaction between social communication and TV watching has also been transformed by the growing amount of story sharing and citizen journalism in user communities. The size of the social community has grown dramatically through the use of social networks, whose popularity depends largely on the ability of users to easily create and share information using various media, including text, images, and video. The simplicity with which text can be composed and shared has made text-based media dominant within most social networks. Video content, however, provides a far more vibrant and varied medium. The increased availability of video recording and sharing technologies within
consumer devices has enabled amateur video producers to share media content with people all around the world, spurring a new generation of citizen video reporters [3] and campaigns of community storytelling. User-generated content, which brings an immersive experience to audiences, is also complementary to high-quality professionally generated content. The combination of the two content types yields an interesting hybrid that can serve as a new social television medium. However, composing and sharing video content, especially high definition (HD) video content, requires state-of-the-art computer hardware, expensive software, and video editing skills far beyond the ability of most of the general public; such obstacles have until now prevented the widespread creation and circulation of video within the digital community. Storisphere is designed to offer a storytelling environment that enables clients to navigate video assets within large media objects, making efficient use of the network. Users compose video stories by connecting selected video assets by means of references. The Storisphere system exploits a number of state-of-the-art technologies and algorithms in metadata management, video analysis, named content networking, and web applications in order to achieve its objectives.
CHALLENGES Designing a web-based video storytelling system that allows members of a large community to edit, broadcast, and report their own stories — as professional television broadcasters do — poses a number of challenges.
CONTENT RETRIEVAL Building a story starts with identifying video assets that are relevant to a defined topic. This is usually supported by a search function, which takes a number of keywords from user input. Conventional search functions retrieve entire video objects whose titles and descriptions best match the keywords. In social networks, people are only interested in
IEEE Communications Magazine • August 2013 (0163-6804/13/$25.00 © 2013 IEEE)
particular scenes of a media object that are relevant to their story. For video storytelling, a scene could be a semantically interesting moment or event (e.g., a goal in a soccer match) within a very large media object. Without a system that allows users to efficiently navigate within media objects, an end user usually makes a local copy of the content, employs video editing tools to extract the required portion, and subsequently uploads it to a video sharing web site. This leads to a number of critical issues such as copyright management and inefficient use of resources (e.g., duplicate content in the network). Hence, a storytelling system must facilitate search and navigation of media assets over any time span within media objects. To support such an advanced search function, layers of time-coded metadata must be constructed and managed for all content in a storytelling system.
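To make the idea of time-coded metadata concrete, the sketch below shows how a keyword search could resolve to time spans inside media objects rather than to whole objects. The names and data model here are hypothetical illustrations, not Storisphere's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Anchor:
    """A time-coded metadata entry attached to a span of a media object."""
    object_id: str
    start: float      # seconds from the start of the media object
    end: float
    keywords: set

def find_assets(anchors, query):
    """Return (object_id, start, end) spans whose keywords match the query.

    The search resolves to time spans (assets) inside media objects,
    not to whole objects, mirroring the navigation described above.
    """
    terms = {t.lower() for t in query.split()}
    hits = [a for a in anchors if terms & {k.lower() for k in a.keywords}]
    return [(a.object_id, a.start, a.end) for a in hits]

anchors = [
    Anchor("match-42", 310.0, 340.0, {"goal", "japan"}),
    Anchor("match-42", 1205.0, 1230.0, {"foul", "yellow"}),
    Anchor("interview-7", 0.0, 90.0, {"manager", "press"}),
]

# A query for "goal football" returns only the matching span of match-42,
# not the whole two-hour media object.
print(find_assets(anchors, "goal football"))
```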
STORY MAKING AND CONSUMPTION Physically combining video clips through decoding, "stitching," and re-encoding places high demands on the processing power, storage capacity, and network connectivity of user devices. These requirements also grow with the length of a video story, the number of video assets, and the quality level of the final output. Editing video stories on devices such as smartphones and tablets could monopolize the processor, fill up storage space with intermediate video files, and quickly drain the battery. Therefore, a storytelling system must coordinate relevant mechanisms to minimize the hardware and network requirements for story editing while maintaining an appropriate level of user experience. For collaborative community story building and sharing, a storytelling system must also support user management, story co-editing, and version control for users without any particular technical background. Although special designs can reduce the amount of network traffic for story making, the consumption (playback) of a story requires all corresponding media assets to be delivered to a user device. Story content distribution should therefore be optimized for a consistent user experience on different types of devices and access networks; technologies such as adaptive video streaming and named content caching should be considered.

COPYRIGHT MANAGEMENT For a system that allows users to watch and link content from different sources, copyright management is essential. Managing content on the Internet is currently extremely difficult once many duplicates of the content have been made and distributed. A storytelling system should allow its users, both amateur and professional, to manage the way their content is tagged, shared, and used — even after the content has been used to create composite content. Existing video sharing services do not provide the level of copyright management needed to support a practical storytelling system.

Figure 1. Storisphere framework.
STORISPHERE Storisphere is a storytelling system that allows a group of users to produce stories collaboratively from shared video clips on various types of client devices and networks. Storisphere encompasses four essential functions: an integrated content and metadata management system (Mediaplex), a media asset referencing system (MARS), a unified user interface (Storiboard), and a content networking framework (SCN), to form a complete ecosystem for storytelling (Fig. 1). Each of the four functions is designed to address specific challenges in content retrieval, story making and consumption, and copyright management. Although Storisphere is a comprehensive system with many advanced features integrated into it, most of the background operations such as content analysis, content caching, and content networking are carried out without human intervention. This design principle aims to allow users to concentrate on story making without considering any complex options such as export quality and video transcoding. Users interact with the Storisphere system via the Storiboard web interface, which serves as a user agent for all storytelling functions.
MEDIAPLEX Mediaplex prepares video content and associated metadata for the other Storisphere functions. Specifically, content ingest and transcoding, meta-data extraction and management, audio-visual content analysis, and content chunking are automated to process all professional and user-generated media objects received by the Storisphere system. Mediaplex is also the only function of Storisphere where any manipulation of audio-visual content occurs; after a content item has been fully ingested, no further manipulation of it is required. Mediaplex presents an ingest interface for both professional and user-generated content. Professional content is automatically taken from the video archive of an existing IPTV service (maintained in the Lancaster University Living Laboratory since 2009 [4]), augmented with
Figure 2. Content and meta-data analysis by Mediaplex (content ingest and transcoding, audio-visual content analysis, meta-data extraction and management, and chunking into media content at quality levels 1 to N).
meta-data from its Electronic Programme Guide (EPG). TV programs from nine U.K. TV channels are made available to Storisphere within 10 minutes of the end of each program. Users can also upload their own content via the Storiboard web portal, which uses JavaScript and PHP to transfer large media objects (of up to 5 Gbytes) to the ingest interface. Upon reception of a media object, a unique content id is assigned and a hash code is calculated. Duplicate uploads sharing a hash code are identified, a link between the new content id and the existing hash code is made, and the duplicate content is then removed from the Storisphere system. In order to maintain a consistent user experience on various types of user devices and heterogeneous access networks, Mediaplex defines a number of quality levels (in terms of video resolution and compression configuration) for different story making and viewing scenarios. Video content is transcoded accordingly to form a group of media objects associated with a hash-code-based identity. Other Storisphere functions such as Storiboard can dynamically select the best quality level for a specific use scenario. For instance, a story can be watched on full-HD monitors using media objects at a quality level of 1080p resolution, 50 frames/s frame rate, and roughly 8 Mb/s bit rate. When the same story is made or watched on a mobile phone, a much lower resolution (e.g., 360p) and bit rate (1 Mb/s) are used. Each transcoded video file is then split into a series of chunk files for smooth quality adaptation (a mechanism similar to MPEG adaptive HTTP streaming [5]) and partial object caching; these are managed by the Smart Content Networking (SCN) function of Storisphere (Fig. 2). The size of the chunks (in the range of 2–10 Mbytes) is determined by the quality level to maximize system performance in handling media objects of different sizes.
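Quality-level selection of this kind can be sketched as follows. The ladder values echo the figures quoted above, but the level names, the headroom factor, and the selection rule itself are illustrative assumptions, not the actual Mediaplex configuration:

```python
# Hypothetical quality ladder echoing the levels mentioned in the text,
# ordered best-first.
QUALITY_LEVELS = [
    {"name": "1080p50", "height": 1080, "bitrate_kbps": 8000},
    {"name": "720p50",  "height": 720,  "bitrate_kbps": 4000},
    {"name": "360p25",  "height": 360,  "bitrate_kbps": 1000},
]

def pick_level(display_height, downlink_kbps, headroom=1.2):
    """Choose the highest level the screen and network can sustain.

    The headroom factor leaves spare bandwidth so playback does not
    stall on small throughput fluctuations.
    """
    for level in QUALITY_LEVELS:
        if (level["height"] <= display_height
                and level["bitrate_kbps"] * headroom <= downlink_kbps):
            return level["name"]
    return QUALITY_LEVELS[-1]["name"]  # fall back to the lowest level

print(pick_level(1080, 20000))  # full-HD monitor on a fast link
print(pick_level(640, 2000))    # phone on a mobile network
```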
Chunking content allows a new video sequence to be composed of portions of other video sequences without any form of transcoding, which in turn allows a video file of the composition to be generated on the fly. The chunking process is aligned with the configuration of the video encoding/transcoding process. The smallest unit of independently decodable video frames is the group of pictures (GOP). Chunk boundaries are made to fall exactly on the boundaries of one or multiple
GOPs, ensuring that each Storisphere media chunk is also independently decodable. Chunking of audio tracks follows the same strategy as for video tracks, to ensure synchronization between the audio and video tracks. Media chunks are not further fragmented, as the fragments would no longer be independently decodable. However, a composed media object need not use the whole length of a chunk. Although a chunk must still be delivered as a whole, Storisphere allows media-level playback definition so that unwanted parts of the media are skipped during playback. One example of this design is to specify MP4 edits when processing MPEG-4 encapsulated files. A wide range of meta-data is maintained by Mediaplex to allow navigation of media assets within media objects (Fig. 2). Some meta-data items are explicitly provided by the sources of content. In practice, meta-data such as program title, description, tags, comments, and subtitles are derived from the EPG of TV programs or third-party annotation, or provided by an end user when user content is uploaded. Meta-data can be associated with a media object as a whole or with a time range within a media object (using time-coded anchor points). Mediaplex also derives implicit meta-data through a number of multimedia analysis processes such as scene detection (which separates interesting shots from others) and subtitle analysis (which identifies scenes that are semantically relevant to certain topics). So that users can fully exploit all storytelling functions, Mediaplex announces a media object as ready only when all relevant processing steps are complete. In order to reduce processing time and cost, Mediaplex carefully schedules the cross-dependent processes of transcoding, video analysis, and chunking.
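The GOP-alignment rule for chunk boundaries can be sketched as below. This is a deliberately simplified, hypothetical chunker working in frame counts rather than container bytes; the real process operates on encoded files, but the alignment invariant is the same:

```python
def chunk_boundaries(total_frames, gop_size, target_chunk_frames):
    """Split a track into chunks whose boundaries fall exactly on GOP
    boundaries, so that every chunk stays independently decodable."""
    # Round the target chunk size down to a whole number of GOPs
    # (at least one GOP per chunk).
    gops_per_chunk = max(1, target_chunk_frames // gop_size)
    step = gops_per_chunk * gop_size
    bounds = []
    start = 0
    while start < total_frames:
        end = min(start + step, total_frames)
        bounds.append((start, end))
        start = end
    return bounds

# 500 frames with a GOP of 50 frames, aiming at roughly 120 frames
# per chunk: every boundary lands on a multiple of the GOP size.
print(chunk_boundaries(500, 50, 120))
```

Because every boundary is a GOP boundary, a player can start decoding at any chunk without fetching its predecessors, which is what makes on-the-fly composition from chunks possible.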
MEDIA ASSET REFERENCING SYSTEM The Media Asset Referencing System (MARS) is the mechanism by which Storisphere is able to recognize time-addressable media assets within media objects, and to generate new media objects on the fly by combining just the chunks that represent time periods that fall within or overlap those assets. The core of the MARS module is the management of edit decision lists (EDLs). An EDL
is an expression of the composition of one content object (e.g., a video) from parts of one or more other assets. An EDL provides frame-accurate presentation and navigation of timecodes [6]. The unit of composition is the segment, which specifies a content object (by its identifier), start and end times within that object (thus specifying an asset), and start and end times within the composed object where the asset is to appear. If an EDL itself has an identifier, it can be treated as another content object and referenced by a segment in another EDL. An example of a rush EDL (an EDL derived for a transcoded media object at a certain quality level) is given in Box 1.

Box 1. An example rush EDL.

Figure 3. Story making using media assets by reference.

The recursive structure of EDL management is exploited in the representation of rushes. A rush is a media object that contains distinct video and audio tracks, with the raw data stored as chunks fetchable by a URI. Each track is represented by an EDL (the track EDL, which has its own identifier), in which each segment references a single file chunk using an identifier from which the chunk's URI can be generated. Chunks are thus arranged nose-to-tail in their proper sequence. A rush EDL (which again has its own identifier) then combines multiple track EDLs. Derived content can then be expressed as
an EDL that references one or more rush EDLs, or even the EDLs of other derived content. An EDL allows the Storisphere system to cut, combine, and link media assets by reference while keeping the media objects intact (Fig. 3). Using the EDL as a story medium, users edit a story by manipulating references to rush EDLs, which ultimately amounts to operations on text-based scripts. There is no limit to the size and scale of an EDL; thus, it can represent stories of any length. EDLs are not directly playable, but can be rendered into a concrete media format such as MPEG-4 on the fly. The rendering process converts segments in an EDL into the necessary MP4 boxes that describe the audio-visual file structure and locate the corresponding file chunks in the network. Segments that reference EDLs must first be resolved by obtaining the referenced EDL. By applying this process exhaustively, the only segments remaining will reference file chunks, thus permitting rendering. The resolution of EDL references is designed to be dynamic; therefore, quality levels need not be chosen until resolution and rendering take place. Derived content expressed using EDL identifiers can therefore refer to all versions of a media object, independent of quality. The quality level need not be specified until the resolution
encounters a segment with a virtual EDL identifier; therefore, this decision can be deferred until just before playback. Consequently, derived content can be constructed and tested at lower qualities on a wide spectrum of consumer devices, including smartphones and tablets with limited networking, but later played back at higher qualities on more powerful devices with high-bandwidth networks.

Figure 4. Storisphere content networking.

Overall, Storisphere composes new content by combining references to the rush EDLs of existing content sources. This composition is performed on the server side to reduce the requirements on the client device. Computationally expensive processes such as video rendering are therefore never conducted on a user device, which allows Storisphere to be used on resource-constrained devices such as low-cost small-form-factor computers, tablets, and mobile phones.
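The exhaustive resolution of EDL references can be illustrated with a deliberately simplified model in which all segments share a single timeline (a real EDL also remaps asset times into the composed object's timeline). The identifiers and structures here are hypothetical:

```python
# Each EDL is a list of segments; a segment references either a raw
# file chunk (by URI) or another EDL (by identifier), with a time span.
EDLS = {
    "track:1": [("chunk", "http://cache/a.m4s", 0, 4),
                ("chunk", "http://cache/b.m4s", 4, 8)],
    "rush:1":  [("edl", "track:1", 0, 8)],
    "story:1": [("edl", "rush:1", 2, 6)],  # the middle 4 seconds of rush:1
}

def resolve(edl_id):
    """Exhaustively replace EDL references with the chunk segments they
    cover, keeping only chunks that overlap the requested time span.
    Once only chunk references remain, rendering can proceed."""
    segments = []
    for kind, ref, start, end in EDLS[edl_id]:
        if kind == "chunk":
            segments.append((ref, start, end))
        else:  # recurse into the referenced EDL
            for uri, s, e in resolve(ref):
                if s < end and e > start:  # chunk overlaps the span
                    segments.append((uri, s, e))
    return segments

print(resolve("story:1"))
```

Both chunks overlap seconds 2–6, so both must be fetched; the playback-level edits described earlier then skip the unwanted parts of each chunk.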
SMART CONTENT NETWORKING The design of the MARS system keeps the computational complexity at the user client minimal. However, composing and editing a video story using remotely stored source content still places high demands on the network. Story editors may continually make adjustments and preview their stories, in which case the server would have to generate and transmit new content for each preview, normally a considerable burden on the user's network. Furthermore, stories made by different users may share identical media assets, as identified by research in the domain of near-duplicate content detection [7]. To this end, Storisphere is designed to facilitate the efficient and scalable caching of content. By strategically caching selected content closer to the client, Storisphere
avoids inefficient requests for identical content soon after an initial request. This prevents congestion on existing network links without any additional expenditure to increase network capacity. Caching in Storisphere takes a multilevel approach. A Storisphere client (e.g., a web browser) can cache the original video chunks of the stories being watched. The cached content is exploited when the same story, or a different story sharing any of the same media assets, is subsequently watched. The caching mechanism also reduces startup latency and increases responsiveness by cutting down the number of identical requests a client makes for the same content. Content distribution in Storisphere is fully compatible with conventional in-network caching mechanisms, which benefit a group of users close to each other rather than an individual. When chunks are cached further from a user's device, but still topologically close, additional users may benefit from the cached content shared at that location (Fig. 4). Compared to conventional caching mechanisms, chunk-based caching stores only the parts of media objects that are actually used for story making. Storisphere also employs application-level caching mechanisms in story distribution networks, a concept adopted from content-centric networking (CCN). Each content chunk has a distinct URI, which makes it favorable for caching. The URI can also serve as an object identifier in a CCN, separating the chunk from its physical location and permitting a wide range of caching policies that would not previously have been possible. For instance, content chunks can be
proactively pushed to caches that are close to specific users when the consumer network is relatively quiet. The synergies of this caching become particularly pertinent when users in the same geographical area are allocated the same physical cache, which is often the case given that facilitating storytelling among communities is one of the key aims of the Storisphere platform.

Figure 5. Storiboard web interface.
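The chunk-level caching behaviour described above can be sketched with a minimal URI-keyed cache. The class and URIs are hypothetical, but the key point mirrors the text: a chunk shared between two stories is fetched from the network only once:

```python
class ChunkCache:
    """A minimal URI-keyed chunk cache in the spirit of SCN."""

    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def fetch(self, uri, origin):
        """Return chunk bytes, consulting the cache before the origin."""
        if uri in self.store:
            self.hits += 1
            return self.store[uri]
        self.misses += 1
        data = origin(uri)        # network fetch only on a miss
        self.store[uri] = data
        return data

# Stand-in for a network fetch from the Storisphere content server.
origin = lambda uri: b"<video chunk for %s>" % uri.encode()
cache = ChunkCache()

story_a = ["http://scn/c1.m4s", "http://scn/c2.m4s"]
story_b = ["http://scn/c2.m4s", "http://scn/c3.m4s"]  # shares chunk c2

for uri in story_a + story_b:
    cache.fetch(uri, origin)

# Watching story_b after story_a hits the cache for the shared chunk:
# 1 hit (c2), 3 misses.
print(cache.hits, cache.misses)
```

The same logic applies one level up when the cache is a shared in-network node rather than the browser, in which case the hit is shared by every nearby user.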
STORIBOARD Storiboard, a front-end web portal, manages the interaction between end users and the Storisphere system (Fig. 5). Storiboard is designed to provide a mechanism to browse and link media assets as simply as possible while automating and enabling the full power of the MARS and SCN systems. The portal is implemented with a wide
Figure 6. Community use case and SCN caching node.

1 http://b4rn.org.uk
range of HTML5 technologies, which allow users to load, edit, and view media assets; these technologies were chosen both so that consistent performance can be provided regardless of the browser's implementation and so that a consistent level of quality can be delivered across platforms. The Storiboard web portal has been developed to adapt to all popular browsers, depending on the platform on which they are running. JavaScript libraries were designed and implemented to dynamically alter the appearance of HTML objects using CSS classes, giving a consistent user experience on different end devices. After logging into the Storiboard web portal, users are presented with a dashboard (Fig. 5) displaying their groups and stories, and options to upload videos, view their uploads, or change their account settings. Each story is allocated a community group when created, and the user is added to the group as the owner. The user can add other users and groups for collaborative editing. Once a story title has been created, the user moves to the story building area, which is divided into three main panes for preview, search, and storyboard purposes. Within the build page, we have defined several views from which a user can choose: a bay-type layout, a layered layout, and a full-screen preview, catering to different user preferences. Users can search metadata entries for media assets by title, description, comments, or user-defined tags. In the particular case shown in Fig. 5, a user retrieves video clips made at an Olympic soccer match, as well as key moments such as goals and fouls in professional TV programs provided by the Storisphere backend system. User-generated content (UGC) and professional content (i.e., featured content) are differentiated in the search results so that users can construct a story with respect to any copyright constraints. Using an HTML5 implementation, a user can drag an asset and place it onto a storyboard for editing and viewing.
The video player within the system allows users to view their compiled stories and those shared by others. Thanks to the design of MARS, placing media assets in the storyboard area, and subsequently changing their order, does not involve any physical processing or rendering of video objects. Only references to video assets (as defined by the EDL) are updated within a story, which means that only a few exchanges of XML messages are needed in the network for story editing. Storiboard also exploits cached video chunks maintained by the SCN. For instance, if a user changes the order of video assets in a story and requests a preview, Storiboard simply changes the order of references and reuses all cached content; the preview requires no external network traffic for video distribution. Once users have compiled a story, they have the option to send a link to their friends via email. This link directs them to a viewing area on the portal to stream any completed stories. This makes it possible to quickly form a community around a social event, and allows additional users to contribute and remix stories about a shared social experience. Finalized compositions can also be shared via services such as Facebook and YouTube. By doing so, however, the content is exported from the Storisphere application, and users can no longer take advantage of the network efficiency of SCN.
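A drag-and-drop reorder in Storiboard thus amounts to permuting a list of references, with no video data touched. A hypothetical sketch of such a metadata-only edit:

```python
# A story is just an ordered list of references to media assets
# (asset identifiers here are illustrative, not Storisphere's format).
story = [
    {"ref": "asset:1.1"},
    {"ref": "asset:2.2"},
    {"ref": "asset:1.3"},
]

def move(story, src, dst):
    """Move the segment at index src to index dst.

    A pure metadata edit: only the order of references changes, so a
    preview after this edit can reuse every cached chunk.
    """
    edited = list(story)          # leave the original version intact
    edited.insert(dst, edited.pop(src))
    return edited

# Drag the last asset to the front of the story.
print([seg["ref"] for seg in move(story, 2, 0)])
```

Because `move` returns a new list, earlier versions of the story remain available, which is the kind of lightweight versioning that reference-based editing makes cheap.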
COMMUNITY USE CASE An early version of the Storisphere system has been released to a few user communities for user feedback. In Wray in northwest England, Storisphere is being tested for story sharing and the local campaign of a community project, B4RN (Broadband for the Rural North).1 B4RN aims to bring fast broadband to rural villages that do not have suitable Internet access via a commercial telecommunications provider.
Residents of Wray (who are mostly farmers) have been making many video clips about how they feel socially isolated by the lack of broadband connectivity and how they have been working to build their own broadband network. Storisphere provides an ideal medium for farmers and villagers, who have little experience of video editing and rendering with conventional software, to share their videos and build video stories for campaigning and fundraising. Figure 6 shows a community use case in which stories of optical fiber being laid in farmland are made from videos shared within the community. Many stories can also be made from different perspectives using the same shared video archive. A wireless, portable local cache node (with a story capacity of 120 Gbytes) is also provided to optimize story sharing in this local community (Fig. 6). There have also been many requests from community users for add-on features, including overlay subtitles and links to alternative video sources such as YouTube.
CONCLUSION AND FUTURE WORK Traditional television services, which engage users in watching TV programs passively, have increasingly been challenged by more interactive and user-centric video applications. With the growing popularity of social networks and video sharing services, users are more willing to become the editors and broadcasters of their own stories. User-generated video content, which provides unique perspectives from individuals, is, we believe, the new medium to complement professional broadcast TV for story sharing in local communities. We have developed Storisphere to provide a collaborative video editing environment for community storytelling. In this article we have described the key research challenges in building the system and the novel design aspects introduced. The Storisphere system is advanced in many respects: it features an automated content analysis module to extract key assets from multimedia content; it adopts a unique design for video editing and composition so that high definition video can be edited on thin clients, such as the iPad, using lightweight EDLs; and content chunking and distribution mechanisms minimize network consumption at user devices. Storisphere is currently being evaluated for video storytelling by real user communities. Future work will include a combined technical and legal investigation of copyright management issues within Storisphere.
ACKNOWLEDGMENTS The work presented in this article is supported by the U.K. FIRM project (Framework for Innovation and Research at MediaCityUK, EPSRC grant number EP/H003738/1) and the European Commission within the FP7 FIRE
Project STEER. Storisphere, and especially its MARS component, has been developed from the Open Narratives Environment (ONE), conceived by colleagues from BBC Research at MediaCityUK (Michael Sparks and Adrian Woolard), and subsequently taken forward within FIRM by Adam Lindsay and others at InfoLab21, Lancaster University. We gratefully acknowledge their contributions to the work reported in this article.
REFERENCES
[1] Red Bee Media, "Broadcast Industry Not Capitalising on Rise of the Second Screen," http://www.redbeemedia.com/sites/all/files/downloads/second_screen_research.pdf, 2012.
[2] Deloitte, "The Rise and Rise of 'Second Screening'," GfK Media, 2012.
[3] M. A. Figueiredo et al., "Empowering Rural Citizen Journalism via Web 2.0 Technologies," Proc. 4th Int'l. Conf. Communities and Technologies, 2009.
[4] M. Mu et al., "P2P-Based IPTV Services: Design, Deployment and QoE Measurement," IEEE Trans. Multimedia, 2012.
[5] "Information Technology — Dynamic Adaptive Streaming over HTTP (DASH) — Part 1: Media Presentation Description and Segment Formats," ISO/IEC 23009-1:2012, 2012.
[6] A. Mauthe and P. Thomas, Professional Content Management Systems: Handling Digital Media Assets, Wiley, 2005.
[7] C. Tianlong et al., "Detection and Location of Near-Duplicate Video Sub-Clips by Finding Dense Subgraphs," Proc. 19th Int'l. Conf. Multimedia, 2011.
BIOGRAPHIES

MU MU ([email protected]) is a senior research associate at Lancaster University, specializing in the use of social information to improve user experience in multimedia content retrieval, distribution, and interaction.

STEVEN SIMPSON ([email protected]) is a senior research associate at Lancaster, with a general background in networking, including multimedia, group communication, programmable networks, and network resilience.

CRAIG BOJKO ([email protected]) is a research associate at Lancaster, specializing in Web 2.0 development and front-end user interactivity.

MATTHEW BROADBENT ([email protected]) is a Ph.D. student at Lancaster, investigating caching mechanisms in software-defined networking.

JAMES BROWN ([email protected]) is a senior research associate at Lancaster, with a background in multimedia, wireless communication, and embedded systems.

ANDREAS MAUTHE ([email protected]) is a reader in networked systems at Lancaster whose research interests include content networking and content management, network management, and resilient network architectures.

NICHOLAS RACE ([email protected]) is a senior lecturer at Lancaster, with a background in networking and multimedia systems. More specifically, his research focuses on multimedia distribution, wireless mesh networks, and software-defined networking.

DAVID HUTCHISON ([email protected]) is a professor in Lancaster's School of Computing and Communications, with a long background in computer communications, networking, and multimedia systems, and has served on program committees and editorial boards of key international conferences, workshops, and journals.