Apple Streaming

Revision as of 21:57, 4 October 2012 by Sam.nicholson (Wrote up how HLS should work)

HTTP Live Streaming (HLS) is a technology that allows Apple devices such as iPhones, iPads, iPod Touches and Safari to receive live video streams, given Apple's refusal to support Flash RTMP streaming.

Please note: at the time of writing (04/10/12) this describes how it will work, but it hasn't been implemented just yet. Implementation should be complete this weekend, and this notice will be removed when it is.

Functional Overview

Broadly, video is ingested from some source (in our case the YSTV live stream), transcoded to an H.264 format, and fed to a segmenter. The segmenter splits the incoming video into short chunks a few seconds long, and generates an M3U8 playlist file listing the addresses of the currently active chunks. (Were this a static, non-live video file, the transcoded video could be passed to the segmenter and all the chunks generated at once; however, there are other, simpler ways to serve on-demand video, using HTML5 players.) The segments and playlist file are published to a directory on a web server; the client reads the playlist file, downloads the chunks it lists, and plays them back to back. Meanwhile, the playlist file is continually updated to remove old chunks and add new ones.
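The playlist the segmenter maintains for a live stream looks something like the following sketch (the segment filenames and sequence numbers here are illustrative, not the actual output of our setup):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:3
#EXT-X-MEDIA-SEQUENCE:1042
#EXTINF:3,
http://ystv.co.uk/static/stream1-1042.ts
#EXTINF:3,
http://ystv.co.uk/static/stream1-1043.ts
#EXTINF:3,
http://ystv.co.uk/static/stream1-1044.ts
```

Note there is no #EXT-X-ENDLIST tag: its absence tells the player the stream is live, so the player keeps re-fetching the playlist as the window of active chunks slides forward.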

Encoding

Ideally, for HLS we would take a second copy of the live video from the Intensity Shuttle, or run a second server with another capture unit; however, the former does not appear to be possible and the latter is prohibitively expensive.

Instead, ystvencode, which acts as the encode server for HLS (given its spare cycles most of the time), reads the high-quality internal stream (stream2) using rtmpdump and transcodes it with FFmpeg as in the command below. This generates 480x320 H.264-encoded video, using the -re switch to have FFmpeg read its input at native (real-time) rate, and pipes it out to standard output. The segmenter, from https://github.com/johnf/m3u8-segmenter, receives the video stream via the pipe and splits it into 3-second chunks, with 3 listed in the playlist at a time. The complete command is:

rtmpdump -r "rtmp://ystvstrm1.york.ac.uk/live/stream2" -v \
  | ffmpeg -re -i - -f mpegts \
      -acodec libmp3lame -ab 64000 \
      -s 480x320 -vcodec libx264 -b:v 480000 -level 30 \
      -aspect 480:320 -g 30 -async 2 - \
  | ./m3u8-segmenter -i - -d 3 -p iphone/stream1 \
      -m iphone/stream1.m3u8 -u http://ystv.co.uk/static/ -n 3

This runs in /usr/local/m3u8-segmenter, and the iphone directory is an NFS mount of the /data/webs/ystv/static/iphone folder on the webserver.
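Since the whole pipeline dies if rtmpdump loses its connection, something like the following could watch for a stalled segmenter. This is a hypothetical monitoring sketch, not part of the setup described above; it assumes a healthy segmenter rewrites stream1.m3u8 roughly every 3 seconds, so a stale playlist means the pipeline needs restarting.

```shell
#!/bin/sh
# Hypothetical health check (illustrative only): report whether an HLS
# playlist file has been rewritten recently enough to look alive.

check_playlist() {
    playlist=$1
    max_age=${2:-15}   # seconds of staleness to tolerate

    if [ ! -f "$playlist" ]; then
        echo "STALL: $playlist is missing"
        return 1
    fi

    # Compare the playlist's mtime against the current time.
    age=$(( $(date +%s) - $(stat -c %Y "$playlist") ))
    if [ "$age" -gt "$max_age" ]; then
        echo "STALL: $playlist last written ${age}s ago"
        return 1
    fi
    echo "OK: $playlist last written ${age}s ago"
}
```

This could be run from cron against /data/webs/ystv/static/iphone/stream1.m3u8, for example. Note that stat -c %Y is the GNU coreutils form; BSD stat spells it -f %m.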

Clientside

For the clients, a simple browser-detection script checks whether the user is on an iPhone or similar device and, if so, queries the database for an HLS stream name for that stream. If one is found, it is prepended with the site address (http://ystv.co.uk/ unless on a development server) and presented, in place of the usual live stream box, as an image of the YSTV logo linked to the M3U8 playlist. Normally the user's device will then render a play symbol over the top; tapping this opens the phone's native HLS player.
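The markup the detection script produces might look roughly like the following sketch (the image path and playlist URL here are illustrative assumptions, not the script's actual output):

```
<!-- Illustrative sketch only; actual paths and stream names may differ -->
<a href="http://ystv.co.uk/static/iphone/stream1.m3u8">
  <img src="/images/ystv-logo.png" alt="Watch the YSTV live stream">
</a>
```

On an iPhone, tapping a link to an .m3u8 playlist is what triggers the native HLS player described above.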