Why ProRes video encoding in post is inferior to ProRes recording

Shooting in ProRes or DNxHD has several benefits if you’re working with Final Cut Pro X or Avid Media Composer. Of course you can always encode or transcode from a different codec to either of these in post-production, but that costs time — which may be valuable. There may be other reasons why you don’t want to wait until post, as I found out. I spent three days experimenting with a GoPro HERO4, four encoding apps for the Mac, the Final Cut Pro X timeline and an Atomos Ninja Assassin. Thanks to the newest version of Telestream’s Switch QC app, I came across some strange results that I didn’t know about before, and which strengthened my views on post-production video encoding versus shooting straight to ProRes with an Atomos Ninja monitor/recorder.

[Image: three previous generations of the Atomos Ninja]

I started these extensive experiments because I noticed how different the native MP4 footage from the HERO4 in 4K was from footage encoded with Squeeze Pro 11 to ProRes, and from that same footage recorded to the Ninja. The differences between the 4K MP4 and the ProRes file from the Ninja didn't come as a surprise. The Ninja's file was 1080p only, as the GoPro's HDMI output is limited to 1080p. But it wasn't so much the resolution or size of the Ninja's output that drew my attention. For example, I noticed more noise in the post-encoded clips than in the original MP4 file. None of the differences between the post-encoded file and the original had an obvious explanation.

So I set up an experiment using frame sizes that matched the GoPro’s HDMI output to ensure a good basis for comparison with the output rendered by the Ninja. For all of the experiments that I’m discussing here I used the HERO4 set to 1080p/60fps. I recorded to a Lexar Professional 1800x microSDHC UHS-II card.

[Image: Lexar Professional 1800x microSD card]

The footage your HERO4 records to its internal media is an MP4 file. Final Cut Pro X and most other NLEs can work directly with MP4 files, so you would expect there to be little point in transcoding to ProRes first. That's the first assumption I was wrong about. I found out by timing the render speed of a simple process: applying a LUT with CoreMelt's LUTx to the MP4 clip and to the same clip transcoded to ProRes with EditReady.
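For reference, the same MP4-to-ProRes transcode can also be done from the command line with the free ffmpeg tool. It is not one of the apps tested in this article, and the filenames below are hypothetical; this is just a minimal sketch of the equivalent job:

```shell
# Minimal sketch: transcode a GoPro MP4 (hypothetical filename) to
# ProRes 422 (HQ) with ffmpeg. ffmpeg was NOT one of the apps tested
# in this article; it's shown only as a free command-line alternative.
if command -v ffmpeg >/dev/null 2>&1 && [ -f GOPR0001.MP4 ]; then
  # prores_ks profile 3 = ProRes 422 (HQ); audio is copied unchanged
  ffmpeg -i GOPR0001.MP4 \
    -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10le \
    -c:a copy GOPR0001_prores.mov
  status=done
else
  status=skipped   # ffmpeg or the input clip is missing on this machine
fi
echo "$status"
```

The resulting .mov wraps the ProRes stream in a QuickTime container, which Final Cut Pro X and Media Composer both ingest natively.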

[Image: EditReady encoding app]

Much to my surprise, the MP4 file took 49sec to render, while the ProRes 422(HQ) file took only 22sec: less than half the time. My clip was only a minute long; imagine how much time you lose with an MP4 timeline of, say, half an hour.
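Scaling those measured render times linearly gives a rough feel for what that means on a longer edit. This is only a back-of-the-envelope extrapolation; real timelines with effects and cuts won't scale perfectly:

```python
# Rough linear extrapolation of the render times measured above.
# Assumes render time scales with footage length, which is approximate.
mp4_sec_per_min = 49      # MP4 clip: 49 s to render one minute of footage
prores_sec_per_min = 22   # ProRes 422 (HQ): 22 s for the same minute

timeline_min = 30         # a hypothetical half-hour MP4 timeline
mp4_total = mp4_sec_per_min * timeline_min        # total seconds for MP4
prores_total = prores_sec_per_min * timeline_min  # total seconds for ProRes
saved_min = (mp4_total - prores_total) / 60

print(f"MP4: {mp4_total/60:.1f} min, ProRes: {prores_total/60:.1f} min, "
      f"saved: {saved_min:.1f} min")
```

On these numbers, a half-hour timeline would render about 13.5 minutes faster from ProRes than from MP4.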

Not all video encoding apps are created equal

I have published a speed comparison between transcoding apps before, but I decided to run the tests again as both Episode and Squeeze have been upgraded/updated in the meantime. However, speed isn’t the only criterion that differentiates transcoding apps.

The fastest was EditReady: it finished the job in 49sec. Squeeze Desktop Pro 11 came in second at 1min 24sec. Episode Pro 7.1 set to Priority 3 finished the job in 1min 44sec, while Apple’s Compressor took 1min 50sec. Those are all figures for the same clip of barely one minute in length.

As I said, speed isn’t the only differentiator. All transcoding jobs were done with the default settings — out of the box — for the ProRes 422(HQ) codec. It turns out these settings differ from developer to developer, with some creating a better clip in the process than others.

Compressor and EditReady don't add (or take away) anything: the transcoded result looks identical to the original file. Squeeze Desktop Pro 11 smooths the image slightly, while Episode Pro 7.1 adds a bit of contrast and sharpness, resulting in a better-looking clip. It boils down, of course, to what you prefer: a file that looks identical to the original, or a slightly "improved" look, whatever that may mean.

Of more importance, in my opinion, was that all of the transcoded results showed a frame render delay, which I estimate at between half a frame and a whole frame depending on the encoding app; it got worse when motion in the clip changed abruptly. Switch 3 Pro can show these differences as a sort of B&W pencil drawing of the moving objects. I'm sure the differences go unnoticed when watching the footage without the original running alongside it, but again, it was something I wasn't expecting.
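Switch's pencil-drawing view is essentially a per-pixel difference map between two clips. A minimal sketch of the idea, using tiny hypothetical grayscale frames (a real QC tool decodes and compares actual video frames, one pair per timecode):

```python
# Minimal sketch of a per-pixel difference map, the idea behind
# Switch's "pencil drawing" comparison view. Frames here are small
# hypothetical grayscale grids, not real video.
def difference_map(frame_a, frame_b, threshold=8):
    """Mark pixels ('#') whose values differ by more than `threshold`."""
    return [
        ["#" if abs(a - b) > threshold else "." for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

original = [[16,  16,  16,  16],
            [16, 200, 200,  16],
            [16, 200, 200,  16],
            [16,  16,  16,  16]]

# Suppose the transcode shifted the bright block one pixel to the right,
# as a half-frame to one-frame delay would between sampled frames.
transcoded = [[16, 16,  16,  16],
              [16, 16, 200, 200],
              [16, 16, 200, 200],
              [16, 16,  16,  16]]

for row in difference_map(original, transcoded):
    print("".join(row))
```

Static areas cancel out to "." while the leading and trailing edges of anything that moved (or was delayed) light up as "#", which is exactly why the view looks like a pencil outline of the moving objects.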

And the recording with the Ninja Assassin?

The recordings I made simultaneously with the Ninja Assassin had their own quirks, but at the very least they were more predictable and less of an issue. For example, the Ninja recording straight from the HERO4's HDMI output at 1080p/60fps was sharper and very slightly noisier when the GoPro was set to shoot in Narrow mode than it was in other modes.

[Image: GoPro HERO4 Black]

In 1080p/60fps SuperView, the Ninja recording was identical to the original except for being sharper, with no added noise; i.e. it had as much noise as the original MP4 file. For good measure, I also tested the HERO4 set to 720p in Narrow mode. There, the Ninja recording was identical but less noisy than the MP4 recording.

In short, shooting to ProRes directly with a Ninja Assassin came closest to the original recording, but it still pays to try out different field-of-view settings to see which cause the least noise and offer the best sharpness.

Conclusion

Shooting with a ProRes recorder like an Atomos Ninja or Shogun saves time. That much we already knew. My experiments showed it also makes for a recording that stays closer to the original in-camera recording — at least with a GoPro HERO4. In some modes, the Ninja Assassin even created less noisy footage than the internal recording to the MP4 file.

If you have to encode in post-production and you aren’t in an environment where you need to encode from/to many different formats and codecs, then EditReady is your best choice. It’s very fast, it doesn’t add anything of its own to the encoded result and it kept up well with the sudden motion changes I mentioned earlier.

It would be interesting to see whether these conclusions also hold up when shooting with different cameras. Would I observe the same behaviour with a Sony A7, a Panasonic GH4, an ARRI ALEXA Mini or a Sony FS5? I don't know, as I don't have a bunch of them lying around. If any camera manufacturers out there are willing to lend me their equipment, I'd be happy to set up the same sort of experiments.
