Why should you shoot video in 4k?
With the release of DJI’s Phantom 3, the big discussion in our store with customers is whether they need the Professional, which shoots 4k, or just the Advanced, which shoots 1080p. The answer is pretty simple, even if it is a little technical, and I will do my best to explain it.
Ability to scale / crop
For me, shooting in 4k has nothing to do with displaying the video on a 4k device. While I have a MacBook Pro and a 5k iMac, all of the TVs in my house and in our store are 1080p. Shooting in 4k is a tool that we use to make the most out of our footage. Since a 4k frame is twice the resolution of a 1080p frame in each dimension (four times the pixels overall), this gives us the ability to “zoom in” 2x without losing any image quality. This is extremely useful for making it look like you were closer to your subject and for reframing your shot.
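To make the numbers concrete, here is a quick sketch in plain Python (using the standard UHD 4k and 1080p frame dimensions) of how much room a 4k frame gives you to crop:

```python
# Frame dimensions: UHD 4k vs. 1080p
uhd_w, uhd_h = 3840, 2160
hd_w, hd_h = 1920, 1080

# Linear scale factor: how far you can "zoom in" before the
# cropped region drops below a full 1080p frame.
zoom = uhd_w / hd_w                              # 2.0x in each dimension

# Total pixel count ratio between the two frame sizes.
pixel_ratio = (uhd_w * uhd_h) / (hd_w * hd_h)    # 4.0x the pixels

print(f"Max lossless zoom: {zoom}x")
print(f"Pixel count ratio: {pixel_ratio}x")
```

In other words, any 1920x1080 region cropped out of a 4k frame is still a pixel-for-pixel full-quality 1080p image.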
The following shots show varying scale factors to illustrate just how useful scaling is:
While I said that we can scale up to 200%, I included a frame scaled to 350% to show that you can actually push the limits sometimes, even if it sacrifices a little image quality. In the video below, we can see the effects of the scaling and reframing on the actual video footage.
And now for the technical bit. At the recent NAB show, Blackmagic made the news and was all the rage because of their new camera that shoots 1080p uncompressed. Why was this even interesting when everyone else, including the lowly DJI Phantom 3, is shooting 4k? In a word: compression. Video footage is huge! A 7.5 minute clip from an Inspire 1 will come in at 3.39 GB. The same footage uncompressed would take up 13.5 GB. In order to save space and be able to write to standard microSD cards, something had to be done to crunch the data down, and this is where chroma subsampling comes in.
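As a rough sanity check on those file sizes (a back-of-the-envelope sketch, assuming a 7.5 minute clip and treating 1 GB as 10^9 bytes), you can back out the average bitrates:

```python
# Back out average bitrates from the file sizes quoted above.
clip_seconds = 7.5 * 60          # 450 seconds of footage

compressed_gb = 3.39             # Inspire 1 clip as recorded (compressed)
uncompressed_gb = 13.5           # the same footage, uncompressed

# Convert GB over the clip duration to megabits per second.
compressed_mbps = compressed_gb * 1e9 * 8 / clip_seconds / 1e6
uncompressed_mbps = uncompressed_gb * 1e9 * 8 / clip_seconds / 1e6

print(round(compressed_mbps))    # ~60 Mbps
print(round(uncompressed_mbps))  # ~240 Mbps
```

That works out to roughly a 4x reduction in data, which is exactly what compression has to buy you to fit this footage onto ordinary microSD cards.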
“Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information, taking advantage of the human visual system’s lower acuity for color differences than for luminance.” – Wikipedia
That being said, I am going to totally and completely over-simplify how this works (my disclaimer to keep the uber-nerd guys off my back). To put it into my layman’s terms: for every 2x2 block of 4 pixels, the brightness data is stored for every pixel, but a single color value is shared by all 4. This is roughly the limit of compression before most people can tell the difference. If we shoot in 4k, then edit and render in 1080p, the downsampling munges all that data back together, so every output pixel gets its own color value again, resulting in better-looking 1080p than had we originally shot in 1080p.
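Here is a minimal sketch of that idea (pure Python, a toy single 2x2 block of made-up sample values, not a real codec): in 4:2:0-style subsampling each 2x2 block keeps four brightness samples but one shared color sample, and scaling 4k down to 1080p collapses each block into a single pixel, which then has its own color value.

```python
# Toy illustration of 4:2:0-style chroma subsampling (not a real codec).
# One 2x2 block of a 4k frame: four luma (brightness) samples...
luma = [
    [200, 180],
    [150, 120],
]
# ...but only ONE shared chroma (color) sample for the whole block.
chroma = 64

# Scaling 4k down to 1080p merges each 2x2 block into a single pixel:
# average the four luma samples, and keep the block's chroma sample.
downscaled_luma = sum(luma[0] + luma[1]) / 4     # (200+180+150+120)/4
downscaled_pixel = (downscaled_luma, chroma)

# After the downscale, every 1080p pixel has its own luma AND its own
# chroma sample -- effectively full-resolution color at 1080p.
print(downscaled_pixel)
```

That per-pixel color is what footage shot natively in 1080p (and then subsampled by the camera) never gets back.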
Being able to scale and crop your footage without sacrificing image quality is an extremely useful feature. Again, while my explanation of chroma subsampling is a dramatic over-simplification of the concept, the short of it is that if you want the best possible looking 1080p footage, you need to shoot in 4k and edit in 1080p.