Everything posted by ronfya

  1. THAT is actually a very good question ! Because all of this will be 90% useless if long enough recordings cannot be pulled on set for movies ... with actors waiting for the cam to cool down or something weird like that.
  2. Yellow, thank you so much for your explanations and links to even more explanations :D That is clearing up some foggy issues about conversions. It is quite strange that necessary operations like import/export/convert are so often overlooked. We don't know exactly what's going on behind the curtain; when it seems to be fine we don't question the process, but when it is wrong, we cannot say why or find a solution. Well ... more learning is needed.

[quote][quote]6. Do you have any practical workflow tutorial from start to finish anywhere ? Hopefully ? I know these are a lot of questions, but correctly understanding this issue and the good workarounds would be so helpful to relieve what is just a pain in the ass. Thank you very much in advance ! R.[/quote] Sorry no tutorials, if you are seeing a specific problem then I could perhaps help.[/quote]

So a practical question now. In your previous post, you explained 3 possible solutions to take control of proper conversions at each stage of import/export into the NLE. In order to clarify how exactly it is done [b][u]in practice[/u][/b], could you for example detail the specific steps of the process when:
- importing videos properly from a 7D into Premiere
- setting any effects in Premiere for correct preview during editing & grading (if needed)
- exporting correctly from Premiere to h.264 video for the web, so that it is displayed the same way in all multimedia players (VLC, QT7, QTX, Windows Media Player, ...)

Finally, what parameters do we have to change if we have to import from another type of cam (5D3 or FS100 for example) and/or edit in another NLE (FCP7, FCPX, Vegas, ...) ?

Thank you so much again. R.
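[Editor's note: to make the question above concrete, here is a minimal numeric sketch of why the levels interpretation matters at playback. The function name and values are illustrative, not taken from Premiere or any player discussed in the thread; it only shows the arithmetic a decoder applies to an 8-bit luma sample depending on the range it assumes.]

```python
# Sketch: the same stored 8-bit luma value decodes to a different RGB gray
# depending on whether the decoder assumes limited range (16-235) or
# full range (0-255). This is the root of the "same file, different look"
# problem across players.

def luma_to_gray(y, full_range=False):
    """Decode an 8-bit luma sample to an 8-bit gray level (neutral chroma)."""
    if full_range:
        return y  # full range: Y maps straight to RGB gray
    # limited range: 16 maps to black (0), 235 maps to white (255)
    g = (y - 16) * 255.0 / (235 - 16)
    return max(0, min(255, round(g)))

# A value recorded as Y=126:
print(luma_to_gray(126))                   # limited-range decode -> 128
print(luma_to_gray(126, full_range=True))  # full-range decode    -> 126
```

If a file holds full-range data but a player decodes it as limited range (or vice versa), every tone shifts, and values below 16 or above 235 are clipped, which is exactly the "clipping state" ambiguity described later in the thread.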
  3. whohooo Yellow, your post has been very enlightening to me. Thank you very much for that ... even though I haven't yet caught all of it and still have my load of questions pending. :P So here I go.

1. Where did you learn about all this ? Do you have any source on the net or a good book to recommend for reading and learning ?

2. About this: [quote]For many 8bit cameras this is captured with the luma (Y) free to go full range 0 - 255 and chroma 16 - 240.[/quote] Does this mean that cameras like the 5D2, 5D3, 7D, D800, FS100 and GH2 actually record full-range luma and only limited-range chroma on the memory card ? Where did you get the info about what exactly is recorded by the cams (code, range, flags ...) ? Any trustworthy source ?

3. About this: [quote]For web based playback majority is 16 - 235, not full range, Flash supports full range in some versions i think. So if encoding for the web squeeze luma 16 - 235 for accurate playback.[/quote] Does this mean that if one uploads a video to Vimeo or YouTube, these websites read the vids assuming they were encoded within limited-range BT.709 ?? The consequence being that no problem occurs if that was the case, but otherwise the display is wrong ?? Moreover, does this mean that when exporting for the web, we HAVE to export AND convert to limited-range spaces such as BT.709 ? Or export in whatever color space with the appropriate flag ? (This would be a color-managed video, but as I understand it, we would not be having this discussion if CM were a standard in video today.)

4. About this: [quote]The thing to separate here is display / playback versus acquisition, getting as much data out of the camera to grade, frankly I don't give a s--t about 16 - 235 being 'correct' to specification conversion of YCC to RGB until I encode out final delivery and squeeze levels to 16 - 235 at 32bit precision rather than have some cheap 8bit codec squeeze at ingest into the NLE. I prefer control over the process, when and where and how.[/quote] Does this mean that when a program such as Premiere CS5.5 reads h.264 video natively, straight from a 7D (without transcoding), it already squeezes it to 16-235 to display the video for editing ? The app forces us to edit in the 16-235 space and export in that space too, instead of editing in the 0-255 space and converting to 16-235 during export ?? On the other hand, if we transcode to a format such as ProRes before importing into Premiere, is the edit in Premiere done in the 0-255 range, with the conversion to 16-235 at the end ?

5. About AVISynth, ffmpeg, AVSPmod ... do you know of any equivalent to these on OSX ?

6. Do you have any practical workflow tutorial from start to finish anywhere ? Hopefully ? :D

I know these are a lot of questions, but correctly understanding this issue and the good workarounds would be so helpful to relieve what is just a pain in the ass. Thank you very much in advance ! R.
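[Editor's note: the "squeeze levels to 16 - 235 at 32bit precision" idea quoted above can be sketched numerically. The function names below are mine, not any NLE's API; the point is that the mapping done once in floating point at final export is lossless to round-trip, whereas an 8-bit squeeze at ingest rounds the data before grading even begins.]

```python
# Hedged sketch of a final-delivery levels squeeze done in float precision.

def squeeze_to_legal(y_full):
    """Map a full-range (0-255) value into broadcast-legal 16-235,
    computed in float so repeated 8-bit rounding doesn't band the image."""
    return 16 + y_full * (235 - 16) / 255.0

def expand_to_full(y_legal):
    """Inverse mapping: 16-235 back out to 0-255."""
    return (y_legal - 16) * 255.0 / (235 - 16)

print(squeeze_to_legal(0))    # -> 16.0 (black)
print(squeeze_to_legal(255))  # -> 235.0 (white)
print(expand_to_full(squeeze_to_legal(128)))  # round-trips exactly in float
```

Doing the same squeeze in 8-bit integers at ingest would collapse 256 input codes into 220 output codes immediately, which is the quality argument for keeping control of when and where the conversion happens.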
  4. Ooooohhhh hi guys, I have been following the eoshd blog for some time now, but I finally decided to sign up to this forum just because of this very subject !! Soooooooo glad that you came up with this one Andrew ! Because I have also made my experiments with the gamma issues I encounter, and really, it's a mess.

My configuration is the following:
- I shoot with a 7D
- I edit in Premiere CS5.5 or FCPX (depending on who I am working with), running on OSX Snow Leopard
- My screen is calibrated with a Spyder 3 Elite. And yes, it is the laptop screen only. And no, I do not intend to invest in an external monitor as some recommend for proper grading, because my videos will never be seen in any environment other than computers.

My goal is the following: find a proper workflow to ensure consistent gamma or "clipping state" at all stages, whatever the player or application the video is viewed in. (By "clipping state" I mean that I don't know if the tonal differences I notice are because of a gamma problem, or because I may have shadows and highlights crushed or enhanced due to Rec.709 profile compensation.) I know color management in video is still in the stone age, but since Hollywood manages it, why couldn't we ?

Anyway, here are my findings. I made some screencaps you can see here: [img]http://ronfya.free.fr/ronfya/all_over_the_web_data/pics/gamma_problem_comparison.jpg[/img]

The first row is what the video looks like in the Final Cut Pro X monitor and in Premiere CS5.5. The last column is the video straight out of cam, viewed in VLC, MPlayer, QuickTime X and QuickTime 7. The others are exports from FCPX and Premiere with the default H.264 settings, viewed in the same players.

After a close comparison of the screencaps, switching one over another on/off, I grouped the matches into color groups to describe the problem. The green, yellow and red groups are nearly perfect matches within each group (and in the case of the greens, it is probably a 100% perfect match because it's Apple QT). A rough classification from dark and saturated to bright and desaturated would be: RED, BLUE, YELLOW, GREEN, PINK. [BLUE & YELLOW] are different but very close, [GREEN & PINK] as well. I can live with that, no problem. But RED is very different from [BLUE & YELLOW], which are both very different from [GREEN & PINK].

The strangest thing for me is the export from Premiere, which displays differently in QT7 and QTX. And the [GREEN & PINK] group, which includes the FCPX monitor, is washed out in comparison to [BLUE & YELLOW] from Premiere, and even to the FCPX export in QT7. Notice that in QT7, if I check the "Enable Final Cut Studio color compatibility" preference, then the exported video from FCPX also displays well in QT7.

Well, anything but consistent. Do you have any help on this ? What are your personal workflows to ensure proper tones at all stages ? Thank you sooooooooooo much !   :D
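[Editor's note: the "washed out" difference described above is the classic symptom of two players assuming different display gammas. A minimal numeric sketch, with illustrative values only (not measurements of any player mentioned in the thread): the same stored code value yields a visibly different luminance under a 1.8 assumption (old Apple convention) versus 2.2.]

```python
# Sketch: a lower assumed display gamma renders the same mid-tone brighter,
# which reads on screen as a lighter, "washed out" image.

def displayed_luminance(code_value, display_gamma):
    """Relative luminance (0-1) of an 8-bit code under a pure power-law gamma."""
    v = code_value / 255.0
    return v ** display_gamma

mid = 128
print(round(displayed_luminance(mid, 1.8), 3))  # gamma 1.8: lighter mid-tone
print(round(displayed_luminance(mid, 2.2), 3))  # gamma 2.2: darker mid-tone
```

Real pipelines use tone curves more complex than a pure power law, but the direction of the shift is the same, which is consistent with a gamma-interpretation mismatch between player groups rather than actual clipped data.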