
Thomas Worth


Posts posted by Thomas Worth

  1. Hey y'all. I'm the developer of 5DtoRGB.

    If you're wondering what happened, well, I stopped working on it a long time ago to concentrate on mobile apps. I removed it from the App Store because it needed a lot of bug fixes and I didn't feel right charging for an app that I knew wasn't going to live up to people's expectations.

    I still have all the code, and if there's enough interest I can take a look at it. I actually planned on releasing a major 5DtoRGB update a while ago that would have dramatically sped up transcoding, but never got it stable enough to release. Then our Rarevision VHS mobile app happened, and uhm, yeah. That took priority, obviously.

  2. There is some confusion here. Y'CbCr 4:4:4 is NOT R'G'B', and you won't ever be able to get back to "true" R'G'B' (i.e. having every legal codeword represent a discrete color) once your signal has gone through the matrix transform that converts it to luma and a pair of colour differences, whatever you do. Transcoding to Y'CbCr (with or without chroma subsampling) means that 3/4 of the codewords won't represent a valid R'G'B' combination: you lose signal-to-noise ratio and won't get it back when transcoding to R'G'B' again. Using 8, 10 or one million bits per channel won't change the fact that you'll still ditch 3/4 of your possible colors when going through this matrix transform, and won't get them back.

     

    What I meant by "true" RGB is that the RGB recovered through the matrix transform is based on discrete YCbCr samples at 1/4 res. In other words, there is one Y, Cb and Cr sample for each pixel prior to transforming (YCbCr 4:4:4). The RGB is recovered from these full-res planes. You say I "won't ever be able to get back" true RGB, and this is correct, since R, G, and B are mixed when matrix encoded. However, in the context of an H.264 4:2:0 camera, the meaning should indicate that the limitations of subsampled color have been overcome.

     

     

    Regarding the 8 to 10 bits stuff, what exactly does your app do, Thomas? Averaging the luma value of 2x2 blocks of pixels?

     

    It sums the 2x2 luma samples to a 10 bit value.
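To make the summing concrete, here is a minimal sketch of the operation as described in this exchange. It is an illustration only, not gh444's actual source; the function name and flat-plane layout are assumptions.

```python
# Illustrative sketch (not gh444's code) of summing each 2x2 block of
# 8-bit luma samples into one 10-bit value. Four samples in 0..255 sum
# to 0..1020, which occupies a true 10-bit range (2**10 - 1 = 1023).

def sum_luma_2x2(luma, width, height):
    """Downsample an 8-bit luma plane (flat list, row-major) to half
    resolution in each dimension by summing every 2x2 block."""
    out = []
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            s = (luma[y * width + x]
                 + luma[y * width + x + 1]
                 + luma[(y + 1) * width + x]
                 + luma[(y + 1) * width + x + 1])
            out.append(s)  # 0..1020, i.e. a 10-bit sample
    return out

# A 2x2 plane of full-white 8-bit samples sums to the 10-bit maximum:
print(sum_luma_2x2([255, 255, 255, 255], 2, 2))  # [1020]
```

Note that summing (rather than averaging) is what preserves the extra precision: averaging back to 0..255 would immediately throw the two gained bits away.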

  3. Just not 10bit 4:4:4 YCbCr as suggested now and previously. It is a bit misleading; it's also misleading to suggest no one has ever done it before. Anyone using MATLAB, Avisynth or even, I'd guess, a nodal compositor like Nuke may well have done the necessary math on 1080p to 480p 4:4:4 or RGB in the past. Who knows, who really cares. And it works on any camera; it doesn't have to be the GH4.

     

    It's not 10 bit YCbCr, and I've been careful not to mention that. Everyone knows you can't get 10 bit YCbCr from an 8 bit source even with a 1/4 downres (maybe a 1/8 downres, though). The claim is that the result is 10 bit 4:4:4, which it is. The DPX file wouldn't work otherwise. :) I think I've been clear that due to the limitations of the 4:2:0 color planes, only the luminance channel contains "true" 10 bit information. The color planes are 8 bit, and since there's only 1/4 the resolution in color (4:2:0), I can't sum the color planes and maintain the full resolution. There are still discrete chroma samples at 2K (so it's true RGB/4:4:4), albeit with a combination of 10 bit luma and 8 bit chroma. Keep in mind, however, that luma contributes to RGB, so all three color channels are being derived from a mix of 10 bit and 8 bit information. Green is almost all from luma, so the green channel is going to be almost all 10 bit.

     

    Oh and as others have suggested, this app should work with other H.264/4:2:0 cameras just fine. If people find that it offers a real benefit, I'll consider adding a GUI and ProRes 4444 export. Or just maybe add the functionality to an existing product.
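The "green is almost all from luma" point can be seen directly in the inverse Rec. 709 matrix. Below is a minimal sketch of recovering RGB from one 10-bit luma sample plus 8-bit Cb/Cr samples; the normalization conventions are assumptions (full-range, chroma neutral at 128), not necessarily what the app does internally.

```python
# Sketch of mixed-depth YCbCr -> RGB via the Rec. 709 inverse matrix.
# Scaling conventions here are assumptions for illustration.

KR, KB = 0.2126, 0.0722  # Rec. 709 luma coefficients; KG = 1 - KR - KB

def ycbcr709_to_rgb(y10, cb8, cr8):
    """y10: 0..1020 summed 10-bit luma; cb8, cr8: 0..255 8-bit chroma."""
    y = y10 / 1020.0             # normalize luma to 0..1
    cb = (cb8 - 128) / 255.0     # normalize chroma to roughly -0.5..0.5
    cr = (cr8 - 128) / 255.0
    r = y + 2 * (1 - KR) * cr
    b = y + 2 * (1 - KB) * cb
    g = (y - KR * r - KB * b) / (1 - KR - KB)
    return r, g, b

# Neutral gray: chroma at its midpoint, so R, G and B all equal the luma.
print(ycbcr709_to_rgb(510, 128, 128))
```

Note how G depends on Y with coefficient 1 and on the chroma terms only through the small KR and KB weights, which is why the green channel inherits most of the 10-bit luma precision.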

  4. After reading most of this thread, one conclusion is that the headline of this post is very misleading!
    The developer himself tells us that you get 10bit in the luma channel only! Which is nice.
    But never does he mention full 10bit 4:4:4...

     

    The DPX files written are most certainly 10 bit 4:4:4. As you mentioned, the luma channel has "real" 10 bit data which contributes to all three RGB color channels when combined with the chroma information via matrix math.

  5. Just tried it, and it works to output the DPX; the files look superb. I opened frames in Premiere, which does not treat them like a clip, but the frames grade superbly and look amazing.

     

    However Resolve 10.1 gives me flickering playback with some frames shifted, some half black, to a random degree. Anyone else had success with Resolve yet?

     

    So the frames play back in Premiere, but Resolve has trouble? Can you check the frames that are half black in Photoshop or some other program to confirm they're ok?

  6. I mentioned this in another thread; perhaps someone could clarify: since 8bit 4:2:0 doesn't use the full 8bit levels range unless it's the JFIF standard, should these calculations take that into account?

    Also the GH4 keeps getting talked up regarding the conversion to 10bit but to clarify surely any 4k cam output could do the same in theory.

    Not sure whether Thomas is using QT or FFmpeg for decompression, but these GH4 MOVs do appear to be JFIF and flagged full range, so QT appears to immediately clip outside of 16-235 at decompression? Certainly looks that way from Driftwood's GH4 native source and bugger's muddle of an encode on Vimeo.

     

    The GH4 files are flagged Rec. 709, which implies broadcast range levels. I use the Rec. 709 matrix in gh444, which seems to be correct so this all looks like good intel. However, I've seen cameras that output full range even though they're flagged broadcast range. As you mentioned, QuickTime clips the data which is why 5DtoRGB is a good tool to recover the lost highlight detail.

     

    FFmpeg's H.264 decoder doesn't care about levels/ranges. It just outputs whatever is in the file. It's up to the programmer to make sense of that depending on what type of file they're writing. In this case, it's uncompressed 10 bit RGB (full range) so I leave the levels alone. That said, nothing is getting clipped with DPX, obviously.
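The clipping behavior being contrasted here can be sketched in a few lines. This is an illustration of the levels issue, not QuickTime's or FFmpeg's actual code: a decoder that assumes broadcast range stretches 16-235 to 0-255 and discards anything outside it, while leaving levels alone keeps every code value.

```python
# Sketch of the broadcast-vs-full-range levels issue described above.
# decode_as_broadcast is roughly what a broadcast-range decode does to
# full-range footage; the exact rounding behavior is an assumption.

def decode_as_broadcast(v):
    """Stretch 16..235 to 0..255, clipping out-of-range code values."""
    v = min(max(v, 16), 235)
    return round((v - 16) * 255 / (235 - 16))

def decode_as_full(v):
    """Leave levels alone: every 0..255 code value survives."""
    return v

print(decode_as_broadcast(250))  # 255 -> highlight detail above 235 is gone
print(decode_as_full(250))       # 250 -> preserved
```

This is why footage that is actually full range but flagged broadcast loses its brightest (236-255) and darkest (0-15) detail when the flag is trusted blindly.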

  7. Sounds great. I'm sure I'll have questions for you once I actually have the camera in my hands. Any idea about the processing power required to handle 10 bit 4:4:4? Same as 4K?

     

    It's typically easier for the CPU to process uncompressed data, but disk bandwidth is more of an issue. Since this app is designed mainly for testing, the DPX files will be large and require lots of storage bandwidth. If it makes sense, I can add the ability to save ProRes 4444 files at 2K. Generally speaking, ProRes footage is much easier to decompress and display than H.264 footage, even when it's 444/RGB. I haven't run into any problems with GH4 footage, though. The files are relatively small and any recent system should have enough CPU power to play them back in realtime.

  8. Here's a quick-and-dirty GH4 downscaler app for Mac I wrote that will sum four luma samples and write each frame as a 10 bit, 2K DPX file:

     

    http://www.mediafire.com/download/opo43u4xv5bdgxo/gh444.dmg

     

    There's no GUI, so you'll need to run it from the terminal. It's very easy. You just type this:

    ./gh444 INPUTFILE.MOV

    It'll spit the numbered DPX frames out in a folder named "dpx_out."

     

    I'd like to know if this really does offer an advantage, or if it's just wishful thinking...
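For anyone curious what "10 bit DPX" means at the byte level, here is a sketch of the standard DPX pixel packing: three 10-bit components in one big-endian 32-bit word, R in the high bits, with 2 low bits of padding. This is the common SMPTE 268M "filled, method A" layout, offered as an assumption about what a file like gh444's output would contain, not as its actual code.

```python
# Sketch of packing one 10-bit RGB pixel into a 32-bit big-endian DPX
# word (common 10-bit "filled" layout: R bits 31-22, G 21-12, B 11-2).
import struct

def pack_dpx_pixel(r10, g10, b10):
    """r10/g10/b10 are 0..1023. Returns 4 bytes, big-endian."""
    word = (r10 << 22) | (g10 << 12) | (b10 << 2)
    return struct.pack(">I", word)

print(pack_dpx_pixel(1023, 0, 0).hex())  # ffc00000 -> full-scale red
```

A real DPX file adds a header (magic number, image element descriptors, offsets) before rows of these words, which is why a 2K frame comes out so large: 2048 x 1080 x 4 bytes is about 8.8 MB per frame.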

  9. Anyone have an idea if the following workflow would degrade the footage:

     

    first use 5dtorgb to convert the 4K footage to 2K ProRes 444

    then start editing and grading

     

    This workflow would probably be the easiest way for people who are outputting 2K anyway, and are using a less recent editing bay.

     

    5DtoRGB isn't set up to do this at the moment, but I'm looking into adding this capability.

  10. Sorry Andrew, the 4K 420 => 2K 444 math of 8.67 bits/pixel (not 10 bits) doesn't support any significant extra color depth. Here's how you can prove it to yourself: grade something in 4K that starts to break up due to limited color depth. Downsample to 2K; see any improvement? 2K 444 at 8.67 bits is still very nice at this price point.

     

    Since the camera supports 10-bit output, perhaps a future firmware upgrade will support 10-bit H.264 (supported by the spec and used in Sony's XAVC). It might not happen for a while due to upline cameras such as the S35 Varicam, but should be possible.

     

    The 10 bit figure is achieved by summing the values of four 8 bit pixels, which automatically downsamples to 1/4 the resolution as a result. This requires special image processing designed for this exact purpose, and is most likely not being done by Compressor, etc.
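The arithmetic behind the two numbers in this exchange is quick to check. The reading of the skeptic's 8.67 figure as an average of one 10-bit and two 8-bit channels is an interpretation, not something either poster states explicitly.

```python
# Summing four 8-bit samples spans 0..4*255 = 0..1020, which needs a
# full 10 bits to represent. The quoted 8.67 bits/pixel matches
# averaging a 10-bit luma channel with two 8-bit chroma channels
# (an interpretation of the figure, not a quote from the thread).
import math

max_sum = 4 * 255
print(max_sum)                                    # 1020
print(math.ceil(math.log2(max_sum + 1)))          # 10
print(round((10 + 8 + 8) / 3, 2))                 # 8.67
```

Both sides are arguably right: the luma channel genuinely gains two bits from the summation, while the per-pixel average across all three channels stays well short of uniform 10-bit.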

  11. That's on the original files Thomas.

     

    4:2:0 artefact I think.

     

    This is the kind of thing you avoid when:

     

    A - You record via the 10bit 4:2:2 HDMI output

    or

    B - You downsample to 2K 4444 ProRes

     

    Let's test this to be absolutely certain. If it is indeed an artifact of the color sampling, then the artifact will not be present in the Y channel. This can be verified by transcoding the original with 5DtoRGB using the "None" setting for decoding matrix. It will show Y, Cb and Cr as R,G, and B in the output file.

     

    Any way you can post originals? I'd like to do some experimenting over here.  :)
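For anyone who wants to run the same check without 5DtoRGB, the planes can be inspected by hand. The sketch below is an assumption-laden helper, not part of any of the tools discussed: it splits one raw yuv420p frame into its Y, Cb and Cr planes and writes the Y plane as a grayscale PGM. A raw frame can be extracted first with e.g. `ffmpeg -i clip.mov -frames:v 1 -f rawvideo -pix_fmt yuv420p frame.yuv`.

```python
# Split a raw yuv420p frame (planar: full-res Y, then quarter-res Cb,
# then quarter-res Cr) and dump the Y plane as a PGM for inspection.
# If an artifact is chroma-only, it won't appear in the Y plane image.

def split_yuv420(raw, width, height):
    y_size = width * height
    c_size = y_size // 4
    y = raw[:y_size]
    cb = raw[y_size:y_size + c_size]
    cr = raw[y_size + c_size:y_size + 2 * c_size]
    return y, cb, cr

def write_pgm(path, plane, width, height):
    """Write an 8-bit plane as a binary (P5) grayscale PGM."""
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (width, height))
        f.write(bytes(plane))

# Usage sketch (4K UHD frame):
# raw = open("frame.yuv", "rb").read()
# y, cb, cr = split_yuv420(raw, 3840, 2160)
# write_pgm("y_plane.pgm", y, 3840, 2160)
```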

  12. If you're trying to downsample GH4 footage to 2K RGB, this isn't really the way to do it.

     

    The proper way is to do the RGB downsample on the original H.264 YCbCr data before it's been meddled with by Compressor or AME or anything else. Posting ProRes files doesn't help because the original 4:2:0 data was screwed up when it was transcoded to 4:2:2. Furthermore, we're working with a second generation copy, which is a no-no.

     

    Post the original files from the camera. I'll have a look and see if I can whip up an app that will do what everyone wants.

  13. [quote name='Axel' timestamp='1342019912' post='13749']
    Very interesting what you said about Premiere before. As I understood you - please correct me with patience - what I see as a preview of the mpeg4 original (all the codecs in question are mpeg4 in the end) in Premiere is also a decompressed version, just one that is not shat onto my hard drive as a huge file; it is rendered "on the fly" and not saved.
    [/quote]

    Yes, Premiere decodes H.264 in realtime the same way any video player would (like VLC, for example). What you're looking at has been decompressed and its chroma rebuilt to RGB for display in the source/program monitors.

    [quote]
    That's my objection regarding 5D2RGB. Thousands of films using the trendy codecs are edited with any of the afore-mentioned softwares and published, and - whether you see through all the intricacies discussed here or not - you never see striking differences that lead you to the conclusion that one NLE is better than the other.
    [/quote]

    This is, of course, completely subjective. Some may be annoyed by the blocky artifacts QuickTime produces around red taillights or traffic lights in night exterior footage. Some may not care. With 5DtoRGB, you always have the option to transcode without artifacts (and for free, too!).

    [quote]
    One of the advantages of Premiere is its native workflow. My Adobe-teacher friend (PC) refuses to use QT, and I can understand him. Premiere isn't as fast with ProRes as with the native codecs. Can it be that the PC QT version is still not fit for 64-bit? I don't know. All I say is, if you like intermediates, work with FCS ...
    [/quote]

    You must be talking about the Windows version of Premiere. On the Mac, editing ProRes in CS6 is still faster than editing native H.264. I edited my last film ([url="https://vimeo.com/42391572"]watch here[/url]) in Premiere CS6, but only after using 5DtoRGB to transcode everything. ProRes is much easier to decompress in realtime than H.264. In general, I-frame only formats will be easier for the NLE to work with compared to GOP-based formats.
  14. [quote name='Axel' timestamp='1341928279' post='13683']
    The bottom line is, OSX doesn't do any harm to my precious GH2 or 7D footage (as the title says [i]screws it[/i]).

    But 5D2RGB does. Because for Premiere, the access to the original data is lost. Okay, this is no disaster, since it is the old FCS workflow (or Cineform, DNxHD or the like), and who ever heard about serious quality loss? However, Premiere doesn't [i]need[/i] it. As you said, yellow, it can map the values anew, using 32-bit precision, perhaps a more reliable procedure than to toss away the original.[/quote]

    This is a non-issue. I'll explain. There's nothing Premiere is doing that 5DtoRGB isn't. They both take 4:2:0 H.264 and re-create an RGB (or 4:2:2, in the case of ProRes) representation of the original data. Actually, you should really be referring to the MainConcept decoder, which is responsible for supplying Premiere with decompressed H.264 output. At that point, all Premiere is doing is reconstructing 4:2:2 or RGB from the decompressed source.

    Remember that the "original" data is useless without a bunch of additional processing. H.264 video stored as 4:2:0 (full res luma, 1/4 res color difference channels) must, in 100% of cases, be rebuilt by the NLE or transcoding program into something that can be viewed on a monitor. It's this "rebuilding" process that sets one NLE apart from another. FCP7/QuickTime does a pathetic job rebuilding H.264. Premiere is better. 5DtoRGB, of course, is best.

    Keeping the original data is always good for archival purposes, but relying on the original files as your source for editing is also a liability. The reason is because if you ever use more than one piece of software in your workflow, there is a possibility that different programs will render H.264 differently. This is certainly the case with FCP7. Anyone who's done motion graphics work knows it's a pain to get footage rendered out of After Effects to match footage in FCP perfectly. This issue is completely solved by transcoding all of your footage beforehand, [i]with one piece of software[/i] (5DtoRGB). That way, you know the footage will look consistent in every program that opens it, and all the gamma shifting/noise issues that have plagued FCP7 users for years are gone forever!
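The "rebuilding" quality gap described in this post comes largely from how the quarter-res chroma is interpolated back up. Here is a one-dimensional sketch contrasting two common strategies; it is illustrative only (real decoders also handle chroma siting and interpolate in two dimensions), and neither function is claimed to be what any particular NLE does.

```python
# 1-D sketch of two chroma upsampling strategies. Nearest-neighbor
# duplication is cheap but blocky at color edges (e.g. around red
# taillights); linear interpolation produces smoother transitions.

def upsample_nearest(chroma):
    """Duplicate each chroma sample: fast, but blocky edges."""
    out = []
    for c in chroma:
        out += [c, c]
    return out

def upsample_linear(chroma):
    """Interpolate between neighbors: smoother color edges."""
    out = []
    for i, c in enumerate(chroma):
        nxt = chroma[min(i + 1, len(chroma) - 1)]
        out += [c, (c + nxt) // 2]
    return out

edge = [0, 0, 255, 255]
print(upsample_nearest(edge))  # [0, 0, 0, 0, 255, 255, 255, 255]
print(upsample_linear(edge))   # [0, 0, 0, 127, 255, 255, 255, 255]
```

The nearest-neighbor output jumps straight from 0 to 255 at the edge, which is exactly the kind of hard chroma stair-step that shows up as blocking around saturated details.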
  15. [quote author=yellow link=topic=726.msg6786#msg6786 date=1341383748]
    I think you have it right there, and the DPX route, which I'm assuming will be 10bit, should be enough, whereas going to 8bit RGB is insufficient to hold the full YCbCr data in the original source files.[/quote]
    That's correct. 10 bits is enough, even when scaled to broadcast range. And you are correct that 8 bit scaled is less data than 8 bit full range, but that's assuming the 8 bit H.264 footage was transcoded to an 8 bit codec. ProRes is 10 bit, so it should be enough even though it's scaled to broadcast range.

    [quote]
    I'd hazard a guess that with regard to interpolation of edges, the lesser refinement in 5DtoRGB is perhaps due to a non-RGB transform at 8bit versus a 32bit conversion in the NLE? I've not had a chance to look properly at your images and may be misunderstanding your observations.
    [/quote]
    Hey Pyriphlegethon, I recall seeing what you're seeing in the past, but my copy of CS6 on the Mac doesn't have this problem. Are you using the latest version of 5DtoRGB? Can you send me a clip that renders like this in Premiere? I'd like to investigate this.
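Returning to the scaled-range point in this post: the claim that 10 bits is enough even at broadcast range can be checked directly. Broadcast-range 10-bit luma spans codes 64-940 (876 steps), comfortably more than the 256 codes of 8-bit full range, so the mapping is reversible. The formula below is a common scaling convention, not necessarily what any particular ProRes encoder uses internally.

```python
# Check that 10-bit broadcast range (64..940) can hold 8-bit full-range
# data without collisions: the step size is 876/255, about 3.4 codes
# per 8-bit level, so no two input values map to the same output.

def to_10bit_broadcast(v8):
    """Map an 8-bit full-range value (0..255) into 10-bit 64..940."""
    return 64 + round(v8 * (940 - 64) / 255)

mapped = [to_10bit_broadcast(v) for v in range(256)]
print(len(set(mapped)) == 256)  # True: the mapping is lossless
print(mapped[0], mapped[255])   # 64 940
```

The same check with an 8-bit broadcast target (16-235, only 219 steps) would fail, which is the quoted poster's point about 8-bit scaled being insufficient.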