
Thomas Worth


Everything posted by Thomas Worth

  1. Hey y'all. I'm the developer of 5DtoRGB. If you're wondering what happened, well, I stopped working on it a long time ago to concentrate on mobile apps. I removed it from the App Store because it needed a lot of bug fixes and I didn't feel right charging for an app that I knew wasn't going to live up to people's expectations. I still have all the code, and if there's enough interest I can take a look at it. I actually planned on releasing a major 5DtoRGB update a while ago that would have dramatically sped up transcoding, but never got it stable enough to release. Then our Rarevision VHS mobile app happened, and uhm, yeah. That took priority, obviously.
  2. What I meant by "true" RGB is that the RGB recovered through the matrix transform is based on discrete YCbCr samples at 1/4 res. In other words, there is one Y, Cb, and Cr sample for each pixel prior to transforming (YCbCr 4:4:4). The RGB is recovered from these full res planes. You say I "won't ever be able to get back" true RGB, and this is correct since R, G, and B are mixed when matrix encoded. However, in the context of an H.264 4:2:0 camera the meaning should indicate that the limitations of subsampled color have been overcome. The app sums each 2x2 block of luma samples to a 10 bit value.
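To make the summing idea concrete, here's a rough sketch (my own illustration, not the actual gh444 source): four 8 bit samples, each 0–255, sum to a value in the range 0–1020, which fits in 10 bits, and the result is a quarter-resolution plane.

```python
import numpy as np

# Hypothetical sketch of 2x2 luma summing: every non-overlapping 2x2 block
# of the 8 bit luma plane is summed into one value. Four 8 bit samples
# (0..255 each) sum to at most 1020, which fits in 10 bits.
def sum_luma_2x2(y):
    # y: (H, W) uint8 luma plane, H and W assumed even
    y = y.astype(np.uint16)  # widen before summing to avoid 8 bit overflow
    return y[0::2, 0::2] + y[0::2, 1::2] + y[1::2, 0::2] + y[1::2, 1::2]

y = np.full((4, 4), 255, dtype=np.uint8)  # a fully white 4x4 luma plane
out = sum_luma_2x2(y)
print(out.shape)  # (2, 2): quarter resolution, as described above
print(out.max())  # 1020: the maximum a summed 2x2 block can reach
```

Note the widening to uint16 before the adds; summing in uint8 would silently wrap around.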
  3. I completely agree. I wrote the app with the hope that others would test the theory. I'm anxious to see what happens.
  4. It's not 10 bit YCbCr, and I've been careful not to mention that. Everyone knows you can't get 10 bit YCbCr from an 8 bit source even with a 1/4 downres (maybe a 1/8 downres, though). The claim is that the result is 10 bit 4:4:4, which it is. The DPX file wouldn't work otherwise. :) I think I've been clear that due to the limitations of the 4:2:0 color planes, only the luminance channel contains "true" 10 bit information. The color planes are 8 bit, and since there's only 1/4 the resolution in color (4:2:0), I can't sum the color planes and maintain the full resolution. There are still discrete chroma samples at 2K (so it's true RGB/4:4:4), albeit with a combination of 10 bit luma and 8 bit chroma. Keep in mind, however, that luma contributes to RGB, so all three color channels are being derived from a mix of 10 bit and 8 bit information. Green is almost all from luma, so the green channel is going to be almost all 10 bit. Oh, and as others have suggested, this app should work with other H.264/4:2:0 cameras just fine. If people find that it offers a real benefit, I'll consider adding a GUI and ProRes 4444 export. Or maybe just add the functionality to an existing product.
  5. The DPX files written are most certainly 10 bit 4:4:4. As you mentioned, the luma channel has "real" 10 bit data which contributes to all three RGB color channels when combined with the chroma information via matrix math.
  6. So the frames play back in Premiere, but Resolve has trouble? Can you check the frames that are half black in Photoshop or some other program to confirm they're ok?
  7. The GH4 files are flagged Rec. 709, which implies broadcast range levels. I use the Rec. 709 matrix in gh444, which seems to be correct so this all looks like good intel. However, I've seen cameras that output full range even though they're flagged broadcast range. As you mentioned, QuickTime clips the data which is why 5DtoRGB is a good tool to recover the lost highlight detail. FFmpeg's H.264 decoder doesn't care about levels/ranges. It just outputs whatever is in the file. It's up to the programmer to make sense of that depending on what type of file they're writing. In this case, it's uncompressed 10 bit RGB (full range) so I leave the levels alone. That said, nothing is getting clipped with DPX, obviously.
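For reference, this is the Rec. 709 matrix transform in its simplest scalar form. This is a sketch assuming 8 bit full range input with chroma centered at 128, purely to illustrate the math; the real gh444 code works on the 10 bit summed luma and handles ranges as described above.

```python
# Sketch of the full-range BT.709 YCbCr -> RGB transform (Kr = 0.2126,
# Kb = 0.0722). Illustration only, not the actual gh444 source.
def ycbcr709_to_rgb(y, cb, cr):
    r = y + 1.5748 * (cr - 128)
    g = y - 0.1873 * (cb - 128) - 0.4681 * (cr - 128)
    b = y + 1.8556 * (cb - 128)
    return r, g, b

# Neutral grey stays neutral: chroma at the 128 midpoint contributes nothing,
# so all three RGB channels come straight from luma.
print(ycbcr709_to_rgb(128, 128, 128))  # (128.0, 128.0, 128.0)
```

You can also see from the coefficients why green is "almost all from luma": the chroma contributions to G (0.1873 and 0.4681) are the smallest of the three channels.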
  8. Here's a 16 bit PNG file, converted from a DPX file output by gh444:
  9. It's typically easier for the CPU to process uncompressed data, but disk bandwidth is more of an issue. Since this app is designed mainly for testing, the DPX files will be large and require lots of storage bandwidth. If it makes sense, I can add the ability to save ProRes 4444 files at 2K. Generally speaking, ProRes footage is much easier to decompress and display than H.264 footage, even when it's 444/RGB. I haven't run into any problems with GH4 footage, though. The files are relatively small and any recent system should have enough CPU power to play them back in realtime.
  10. Hey guys, I've written a really simple command line app for Mac that will resample GH4 footage from 4K 4:2:0 to 2K 4:4:4 using pixel summing. This will give you real 10 bit data in the luminance channel, so it's not just doing a brute-force bump from 8 bits to 10 bits. There actually is some interesting pixel finagling going on here: http://www.mediafire.com/download/f7h950spj5hrn9f/gh444.dmg
There's no GUI, so you'll need to run it from the terminal. Do this by copying the app into the directory that contains the GH4 MOV files and using the following command in a terminal:
./gh444 INPUTFILE.MOV
Make sure you cd to that directory first if necessary. You can do this by typing "cd" into the terminal, adding a space, and then dragging the folder containing the MOV files into the terminal window. It will automatically add the path to the cd command. It'll look like this:
cd /path/to/gh4/files
The app will spit numbered DPX frames out in a folder named "dpx_out." I'd love for you guys to give it a try and see if you find it useful!
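If you have a whole folder of clips, a simple shell loop (my suggestion, not part of the app) will run them all. It's shown here as an echo dry run so you can check the file list before committing:

```shell
# Hypothetical batch wrapper around the gh444 invocation described above.
# The echo makes this a dry run that just prints each command; remove the
# echo to actually transcode.
for f in *.MOV; do
  echo ./gh444 "$f"
done
```

Run it from the folder containing the MOV files, same as the single-file command.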
  11. Here's a quick-and-dirty GH4 downscaler app for Mac I wrote that will sum four luma samples and write each frame as a 10 bit, 2K DPX file: http://www.mediafire.com/download/opo43u4xv5bdgxo/gh444.dmg
There's no GUI, so you'll need to run it from the terminal. It's very easy. You just type this:
./gh444 INPUTFILE.MOV
It'll spit the numbered DPX frames out in a folder named "dpx_out." I'd like to know if this really does offer an advantage, or if it's just wishful thinking...
  12. 5DtoRGB isn't set up to do this at the moment, but I'm looking into adding this capability.
  13. That's correct, which is why the results so far aren't showing any benefit. Testing them should be put on hold until we have the camera originals and can process them specifically for ~10 bit output.
  14. The 10 bit figure is achieved by summing the values of four 8 bit pixels, which automatically downsamples to 1/4 the resolution as a result. This requires special image processing designed for this exact purpose, and is most likely not being done by Compressor, etc.
  15. Let's test this to be absolutely certain. If it is indeed an artifact of the color sampling, then the artifact will not be present in the Y channel. This can be verified by transcoding the original with 5DtoRGB using the "None" setting for decoding matrix. It will show Y, Cb and Cr as R,G, and B in the output file. Any way you can post originals? I'd like to do some experimenting over here. :)
  16. Also, I don't know if this happened during transcoding or not (which is why you need to post the original files from the camera, please), but these are clearly upscaling artifacts:
  17. If you're trying to downsample GH4 footage to 2K RGB, this isn't really the way to do it. The proper way is to do the RGB downsample on the original H.264 YCbCr data before it's been meddled with by Compressor or AME or anything else. Posting ProRes files doesn't help because the original 4:2:0 data was screwed up when it was transcoded to 4:2:2. Furthermore, we're working with a second generation copy, which is a no-no. Post the original files from the camera. I'll have a look and see if I can whip up an app that will do what everyone wants.
  18. [quote name='Axel' timestamp='1342019912' post='13749'] Very interesting what you said about Premiere before. As I understood you - please correct me with patience - what I see as a preview of the mpeg4 original (all the codecs in question are mpeg4 in the end) in Premiere is also a decompressed version, just one that is not shat as huge file onto my hard drive, it is rendered "on the fly" and not saved. [/quote] Yes, Premiere decodes H.264 in realtime the same way any video player would (like VLC, for example). What you're looking at has been decompressed and its chroma rebuilt to RGB for display in the source/program monitors. [quote] That's my objection regarding 5D2RGB. Thousands of films using the trendy codecs are edited with any of the afore-mentioned softwares and published, and - whether you see through all the intricacies discussed here or not - you never see striking differences that lead you to the conclusion that one NLE is better than the other. [/quote] This is, of course, completely subjective. Some may be annoyed by the blocky artifacts QuickTime produces around red taillights or traffic lights in night exterior footage. Some may not care. With 5DtoRGB, you always have the option to transcode without artifacts (and for free, too!). [quote] One of the advantages of Premiere is it's native workflow. My Adobe-teacher friend (PC) refuses to use QT, and I can understand him. Premiere isn't as fast with ProRes as with the native codecs. Can it be that the PC QT-version is still not fit for 64-bit? I don't know. All I say is, if you like intermediates, work with FCS ... [/quote] You must be talking about the Windows version of Premiere. On the Mac, editing ProRes in CS6 is still faster than editing native H.264. I edited my last film ([url="https://vimeo.com/42391572"]watch here[/url]) in Premiere CS6, but only after using 5DtoRGB to transcode everything. ProRes is much easier to decompress in realtime than H.264. 
In general, I-frame only formats will be easier for the NLE to work with compared to GOP-based formats.
  19. [quote name='Axel' timestamp='1341928279' post='13683'] The bottom line is, OSX doesn't do any harm to my precious GH2 or 7D footage (as the title says [i]screws it[/i]). But 5D2RGB does. Because for Premiere, the access to the original data is lost. Okay, this is no disaster, since it is the old FCS workflow (or Cineform, DNxHD or the like), and who ever heard about serious quality loss? However, Premiere doesn't [i]need[/i] it. As you said, yellow, it can map the values anew, using 32-bit precision, perhaps a more reliable procedure than to toss away the original.[/quote] This is a non-issue. I'll explain. There's nothing Premiere is doing that 5DtoRGB isn't. They both take 4:2:0 H.264 and re-create an RGB (or 4:2:2, in the case of ProRes) representation of the original data. Actually, you should really be referring to the MainConcept decoder, which is responsible for supplying Premiere with decompressed H.264 output. At that point, all Premiere is doing is reconstructing 4:2:2 or RGB from the decompressed source. Remember that the "original" data is useless without a bunch of additional processing. H.264 video stored as 4:2:0 (full res luma, 1/4 res red and 1/4 res blue) must, in 100% of cases, be rebuilt by the NLE or transcoding program into something that can be viewed on a monitor. It's this "rebuilding" process that sets one NLE apart from another. FCP7/QuickTime does a pathetic job rebuilding H.264. Premiere is better. 5DtoRGB, of course, is best. Keeping the original data is always good for archival purposes, but relying on the original files as your source for editing is also a liability. The reason is that if you ever use more than one piece of software in your workflow, there is a possibility that different programs will render H.264 differently. This is certainly the case with FCP7. Anyone who's done motion graphics work knows it's a pain to get footage rendered out of After Effects to match footage in FCP perfectly. 
This issue is completely solved by transcoding all of your footage beforehand, [i]with one piece of software[/i] (5DtoRGB). That way, you know the footage will look consistent in every program that opens it, and all the gamma shifting/noise issues that have plagued FCP7 users for years are gone forever!
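To make that "rebuilding" step concrete, here's the simplest possible chroma reconstruction, nearest neighbour. This is my own illustration of the general idea, assuming numpy, and not what any particular decoder actually does; better tools interpolate instead of just repeating samples, which is exactly where decoders differ.

```python
import numpy as np

# 4:2:0 stores chroma at quarter resolution (half width, half height), so
# each chroma sample must be expanded to cover a 2x2 block of luma pixels
# before RGB can be formed. Nearest-neighbour repeat is the crudest way.
def upsample_chroma_nn(c):
    # c: (H/2, W/2) chroma plane -> (H, W) full-resolution plane
    return np.repeat(np.repeat(c, 2, axis=0), 2, axis=1)

cb = np.array([[100, 200]], dtype=np.uint8)  # a tiny 1x2 chroma plane
print(upsample_chroma_nn(cb))
# [[100 100 200 200]
#  [100 100 200 200]]
```

A decoder that filters across those repeated blocks instead of duplicating them is what avoids the blocky artifacts mentioned above.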
  20. [quote author=yellow link=topic=726.msg6786#msg6786 date=1341383748] I think you have it right there and the DPX route which I'm assuming will be 10bit should be enough whereas going to 8bit RGB is insufficient to hold the full YCbCr data in the original source files.[/quote] That's correct. 10 bits is enough, even when scaled to broadcast range. And you are correct that 8 bit scaled is less data than 8 bit full range, but that's assuming the 8 bit H.264 footage was transcoded to an 8 bit codec. ProRes is 10 bit, so it should be enough even though it's scaled to broadcast range. [quote] I'd hazard a guess that with regard to interpolation of edges that less refinement in 5DToRGB is perhaps due to a non RGB transform at 8bit versus a 32bit conversion in the NLE? I've not had chance to look properly at your images and may be misunderstanding your observations. [/quote] Hey Pyriphlegethon, I recall seeing what you're seeing in the past, but my copy of CS6 on the Mac doesn't have this problem. Are you using the latest version of 5DtoRGB? Can you send me a clip that renders like this in Premiere? I'd like to investigate this.