
Posts posted by Towd


  1. 9 hours ago, androidlad said:

    4K RGB 4:4:4 readout mode:

     

    2 hours ago, androidlad said:

    There are various 2x2, 3x3, 6x6 binning modes optimised for speed, quality or a compromise between the two.

    Very cool!  Interesting to see how they're sampling the Bayer pattern for the 4:4:4 color.  Some pretty extensive binning going on.  With this level of sampling, the speed-optimized binning may still produce a really nice image with just a tad more noise.  It'll be interesting to see the final results.
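
    For anyone curious what binning actually does to the data, here's a tiny sketch of the basic idea. This is just plain 2x2 block averaging in NumPy on a made-up mono readout, not Sony's actual Bayer-aware binning (which we don't know the details of):

    ```python
    import numpy as np

    # Fake 10-bit mono readout; a real sensor bins same-color Bayer sites.
    raw = np.random.randint(0, 1024, size=(8, 8)).astype(np.float64)

    # Average each non-overlapping 2x2 block: quarters the pixel count and
    # cuts per-pixel noise roughly in half (averaging 4 samples ~ sqrt(4)).
    h, w = raw.shape
    binned = raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    print(raw.shape, "->", binned.shape)  # (8, 8) -> (4, 4)
    ```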


  2. 2 hours ago, androidlad said:

    What's the obsession with per-pixel FWC? Don't forget the pixels sit on a 36x24mm sensor, smaller pixels don't collect any less light, it's just there are more of them doing the same thing.

    Yeah, but in the spec you posted, it says it does 6K readout through pixel binning/sub-sampling.  So it's throwing away data that could have been preserved.

    It just looks more like a chip designed by a marketing department than something to serve the needs of filmmakers.  Granted it's for consumer devices and megapixels sell cameras, so it's not like I'm surprised. 

    My experience has been that engineers often have very little insight into end-user needs, and that was my point.


  3. 16 hours ago, androidlad said:

    Sony has some of the most talented sensor architects, how much do you know?

    I know that Sony's sensor architects don't deliver movies.  But I can see some value in a 100MP sensor for stills.

    I have to agree with @Nikkor though, I'd love to see them continue to improve dynamic range and rolling shutter in their consumer line more than resolution.


  4. 1 hour ago, Anaconda_ said:

    Braw Adjustments - Colour - Sharpen - NR 

    You typically want to put any finishing/sharpening at the very end of your processing, just before rendering it out.  How strong you make the sharpening and the radius you use will depend on the final look you're going for and how soft your original footage is.  Depending on the sharpening filter, there is typically a strength and a radius value.

    As part of your finishing process, you may also want to add grain into the image.  Opinions vary on whether you add the grain before or after the sharpening, but I find that if I'm dealing with an image that needs a lot of sharpening, I'll add the grain last so I'm not heavily sharpening my grain.  Conversely, if I'm doing a composite and matching grain between elements, the grain will come earlier in the pipeline.  But in that case, any sharpening I add back in is very minimal, just enough to restore detail lost to encoding.

    If you want to dial in specific values for minimal sharpening to make up for re-encoding your original source to whatever your final format is, you can run a "wedge": try a variety of values, then compare the detail in your output video at each sharpness setting against your raw source and pick the one that matches most closely.  Depending on your source material and delivery format, this can vary wildly.  In cases where I'm working from a high-res (6K-4K) source and delivering at 2K, I may not even use sharpening.
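
    If you want to automate that comparison, here's a rough sketch of the wedge idea in Python with OpenCV.  The file name, the simulated softening, and the strength values are all placeholders, so treat it as an illustration of the process rather than a recipe:

    ```python
    import cv2
    import numpy as np

    # Hypothetical file name; any still pulled from your raw source works.
    src = cv2.imread("source_frame.png").astype(np.float32)

    # Simulate the softening introduced by scaling/encoding to delivery res.
    h, w = src.shape[:2]
    soft = cv2.resize(cv2.resize(src, (w // 2, h // 2)), (w, h))

    def unsharp(img, radius, amount):
        """Classic unsharp mask: img + amount * (img - blurred)."""
        blurred = cv2.GaussianBlur(img, (0, 0), radius)
        return np.clip(img + amount * (img - blurred), 0, 255)

    def psnr(a, b):
        """Peak signal-to-noise ratio in dB; higher = closer match."""
        mse = np.mean((a - b) ** 2)
        return 10 * np.log10(255.0 ** 2 / mse)

    # The wedge: step through strengths and score each against the source.
    for amount in [0.0, 0.25, 0.5, 0.75, 1.0, 1.5]:
        score = psnr(unsharp(soft, radius=1.0, amount=amount), src)
        print(f"amount={amount:.2f}  PSNR vs source: {score:.2f} dB")
    ```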

    Finally, opinions also vary about where to put any degraining, but I prefer to put it at or near the very beginning of my color pipeline.  If your degrainer expects something in Rec 709, though, do your color transform into video space first and then run it.  A little testing with your camera's source footage will go a long way here as well.  I find that, especially when I'm working with 8-bit source material, degraining can add some pseudo bit depth to the image before I begin pulling on the color.

    So, I'd typically do:

    1. Raw adjustments to set color balance and exposure (if working with raw footage)
    2. Degrain/NR
    3. Color
    4. Sharpen/grain

  5. 20 minutes ago, thebrothersthre3 said:

    Sounds like a good workflow. My bigger issue is just not really knowing what proper color looks like to judge skintone or really anything in the shot. 

    I once worked with a really good compositor at a large VFX house who admitted to me that he was totally colorblind.  His trick was to match everything by reading the code values from his color picker tool, matching the parts of his composite purely from the values he sampled.

    I've always remembered that when I feel I can't trust my eyes, or something is not working for me.  You can color grade just by making sure everything is neutral and balanced.  Later, as you become more comfortable with the process and gain more experience you can start creating looks or an affected grade.

    Generally to start you want to get your white balance correct.  Find something in your shot that you know should be neutral or white.  A wall, a t-shirt, a piece of paper, or anything else that should be white or gray.  After that, check your blacks and make sure they are neutral, then double check your whites.  Finally, check your skin tones and make sure they are correct.  You can do this by using the vectorscope and just getting them on the skin tone line.  Somewhere in this process you'll want to set your exposure.  I generally just make a rough exposure adjustment at the beginning so I can see everything, then dial it in once my balance is set.
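
    If you want to see the code-value approach in action, here's a small sketch of balancing off a neutral patch with OpenCV/NumPy.  The file names and patch coordinates are placeholders:

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("shot.png").astype(np.float32)   # hypothetical file

    # Sample a region that should be white or gray (coordinates made up).
    patch = img[400:420, 600:620]
    mean_bgr = patch.reshape(-1, 3).mean(axis=0)
    print("sampled values (B, G, R):", mean_bgr)   # roughly equal == neutral

    # Per-channel gains that push the sampled patch toward gray.
    gains = mean_bgr.mean() / mean_bgr
    balanced = np.clip(img * gains, 0, 255).astype(np.uint8)
    cv2.imwrite("balanced.png", balanced)
    ```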

    One thing I do a lot when studying how a film I like is graded is to take screen captures from a Netflix stream or other source and pull them into a project to compare the color values.  Then you'll have a roadmap for what you are trying to match.


  6. 1 minute ago, thebrothersthre3 said:

    Not sure if I have that in my adobe package.

    It's a separate piece of software you run alongside After Effects.  It looks like they still offer an evaluation version, so you can try it out.

    After Effects will load a sequence using your raw settings, and LRTimelapse takes care of the exposure variations.  It's been years since I've done it, so I don't remember the exact workflow, but it's something along those lines.  😁

    Also, I've never done it this way, but I imagine you could render out a clip using a Photoshop timeline.  After Effects is just nice because you get its workflow, with things like Lumetri color adjustments you can layer on top of your raw grade if you want to tweak things.


  7. 1 hour ago, kye said:

    What editing package are you using?

    One thing I do remember is that because cameras are often not 100% accurate with their shutter speed the images might be slightly different exposures, so you have to use a de-flicker plugin in post to even it out.

    I used to process a lot of timelapse footage for an old job.  The standard we used was LRTimelapse.  My memory is sketchy, but I think the process involved grading one raw image for the look you wanted, then running the sequence through the LRTimelapse software; it would calculate the exposure variations for you and try to correct for them.  You would then export that exposure data into After Effects and render out the sequence using your raw settings.
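
    I have no idea what LRTimelapse does internally, but the core deflicker idea is simple enough to sketch: measure each frame's brightness, smooth that curve over time, and gain each frame toward the smoothed value.  Something like this (paths and window size are placeholders):

    ```python
    import glob

    import cv2
    import numpy as np

    paths = sorted(glob.glob("timelapse/*.jpg"))
    frames = [cv2.imread(p).astype(np.float32) for p in paths]
    lum = np.array([f.mean() for f in frames])

    # Smooth the per-frame brightness curve with a 9-frame moving average,
    # padding the ends so the curve stays the same length.
    k = 9
    smooth = np.convolve(np.pad(lum, k // 2, mode="edge"),
                         np.ones(k) / k, mode="valid")

    # Gain each frame toward the smoothed curve to remove the flicker.
    for p, f, l, s in zip(paths, frames, lum, smooth):
        out = np.clip(f * (s / l), 0, 255).astype(np.uint8)
        cv2.imwrite(p.replace(".jpg", "_deflickered.jpg"), out)
    ```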

    Overall, it produced very nice results even if there was a lot of flickering in the original sequence.  It used to offer a trial version for testing if you wanted to give it a try.


  8. Here's my take on this footage:

    First one is my neutral grade... but with the exposure pushed way down just to add drama and get the image to pop... at least to me.  Also, I threw a power window over the right pillar so it competes less as the subject of the shot.

    Second one is my crack at a grade for a ghost story (since Kye is looking for themes😄).  In this story our little friend here is selling haunted curiosities.

    vietnam.png

    vietnamGhostStory.png


  9. You know @kye, when I was reviewing the 2 Gems Media stuff, I got generally a similar impression.  His newer stuff is better than some of his early stuff.

    I also suspect he may just shoot the "standard" color profile on the GH5 and white balance off a wall in the room he's in.  (Or off a card.)

    His post process may not be much more than degraining, tweaking exposure, adding glow, and adjusting skin tones (if that).  That would explain the clipped highlights.  Plus, the standard (non-log, Cine-D) profiles can deliver really nice results with minimal work.

    This way he can just crank out videos with minimal effort.  Whatever he's doing, it's still a nice enough end product that it seems to be keeping him busy.


  10. 7 hours ago, Mark Romero 2 said:

    Thanks for all the work you did on this. i really appreciate it.

    Yes, your images look a lot closer to what I was hoping to accomplish. But now you have to tell me EXACTLY what your recipe was :)

    At least, let me ask you how you added the glow to the highlights?

    Also, your grade seems to avoid some of the nasty color mashups on the walls (where in my video there seems to be kind of a purplish / greenish collision... don't know how else to describe it).

    And yeah, since this is sony footage, you have to protect the highlights, because if you don't protect highlights with sony 8-bit... well...

    Hey Mark, I'm really glad you found it helpful!  It was kind of fun to take a crack at this type of grading since it's not a look I normally do, but it's nice and certainly has some utility. 

    I don't know if you use Resolve, but I'll attach the powergrade, which you're free to use.  Otherwise, I'll go through my basic approach and thought process on this.  Finding a general system that works for me helps me compartmentalize what I'm doing.  Overall though, I try to keep things simple and generally avoid secondaries or masks unless I'm trying to fix problems in footage.

    I laid everything out sequentially because I'm not sure what package you use, but this is my general order of operations.  If you are using Resolve, you could potentially combine the WB (White Balance), Exposure, Contrast, and Saturation nodes into a parallel operation.  I think everything else depends on the previous node, although it's all personal preference.  You could potentially throw the contrast pop, vignette, and glow into one huge parallel operation with the others, but I kind of like working in serial.  So here are the steps and a brief explanation:

    1. Degrain
    2. Roll Off - This is just a curve I use to roll off highlights before converting to my working color space, so I can preserve a little more detail.  I start dropping highlights from 50% on up to 100%.  A bit like forcing high dynamic range into an image.
    3. Color Transform - This gets me into my timeline's color space as soon as possible where I prefer to work.  In this case, I set it to sRGB.
    4. White Balance - Use the eyedropper on something you want white in the scene, then tweak a little.  Sometimes I take multiple samples and average between them.
    5. Exposure - Just gain control until my mids are where I want them.  Then go back and tweak my Roll Off if I want to save more highlights.
    6. Contrast - Just a small amount of Contrast since the ref video was pretty contrasty.  Could use an S-Curve here.
    7. Saturation - Dial in sat after setting contrast or gamma.
    8. Contrast Pop - I like to put a touch of this in when it works.  It's a different kind of contrast.  If you've used Nik tools, this is pretty much the same thing as Tonal Contrast.  Can look weird if dialed too high.
    9. Vignette
    10. Glow - This is just the glow effect filter in DaVinci.  I set the threshold to 1 since I was pushing highlights out of range, so it added some glow to anything outside legal range.  Gives a soft edge.  Very low spread; by default it's really large.
    11. Sharpen - Optional.  Did a micro amount.
    12. Grain would go last, but I didn't use any on this.

    One note about the powergrade: I think contrast pop, degrain, color space transform, and maybe glow are only in the full version of Resolve.  You can probably find equivalents or leave them out.  An S-Log2 LUT could potentially replace the Color Space Transform since I'm compressing highlights before it anyway.
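
    For what it's worth, here's a little sketch of what I mean by the roll-off in step 2, written as a rational soft-clip in NumPy.  The knee and ceiling values are placeholders; my actual node is just a hand-drawn curve:

    ```python
    import numpy as np

    def rolloff(x, knee=0.5, ceiling=1.0):
        """Pass values below the knee through untouched; soft-compress
        everything above it, approaching `ceiling` instead of clipping."""
        x = np.asarray(x, dtype=np.float64)
        over = np.clip(x - knee, 0.0, None)
        span = ceiling - knee
        return np.where(x <= knee, x, knee + over * span / (over + span))

    for v in [0.25, 0.50, 0.75, 1.00, 1.25]:
        print(f"{v:.2f} -> {float(rolloff(v)):.3f}")  # highlights compress
    ```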

    Real_Estate_001_1.1.1.drx


    BTW, IMO the Sony you shot your test footage with is pretty nice.  Degraining at the beginning smooths out the 8-bitness of the image, and it had more dynamic range than I could use.  I'm sure if I worked with it regularly and graded a variety of shots with it, I'd develop a more robust pipeline to take care of any quirks it might have.  No idea how skin tones look in it though.

    Still, the Z6 has me curious.  But I'm waiting to see if they add internal raw at some point.


    So, I watched the 2Gems Media video a few times, along with some of the other videos on their channel.  It's an interesting modern style that he's obviously using to great success.  I wouldn't call it a cinematic style.  He's not afraid to let his whites clip, and it looks like he degrains his footage and doesn't add any grain back, just leaves it very clean.

    The most important thing he does is get a nice neutral white balance.  He also seems to push overall exposure into the upper range.  I'm not saying he lifts blacks, but his middle exposure area feels higher than normal.  Conversely, for a cinema look, I'd push everything much darker.  I'm sure this is to make a home feel warm and inviting.

    I also noticed he seems to put a soft glow around his highlights, or he has a filter that does it.  In any case, I put a little glow at the very top of the exposure range.

    Didn't use any secondaries or keys, or animate any values.  So I let the beginning remain a bit green since it's getting the bounce off the walls anyway.  It is a bit of a challenging shot with the mix of light sources and colors, so I aimed for a fairly neutral white balance, tweaked a tad after pulling a white sample off the back window frame.

    Anyway, here's my interpretation of his style.  Let me know if you think I got close.  I included my node graph for the order I did stuff.  Only one operation per node.  Just posting last and first frames.  Last frame first, since I looked at that the most for the hero look.  I feel I could lift the exposure even a quarter to half stop more, but it bugs the hell out of me to be this bright already, and I did try to protect highlights a little more than I think the 2gems guy does.   Must... protect... highlights.... 🤣

     

    ArchitectureGrade01.PNG

    ArchitectureGrade02.PNG


  12. 2 hours ago, kye said:

    I used to use those Pre-Clip and Post-Clip groups, but I got a bit frustrated with them because you couldn't have clips in multiple groups.  Resolve has now gone one better and has Shared Nodes, which means you can combine treatments in any way that you feel you might want to.

    I always think of the example of shooting two scenes with two cameras.  You obviously want to grade the cameras differently to match them so you want all the shots from each camera to share the same processing.  Now they all have the same kind of look, you want to apply a creative grade to them, and you actually want to grade the two scenes differently as they have different dramatic content.  Previously you could use the grouping to combine the processing of either the cameras, or the scenes, but not both.  Now with the shared nodes you can mix and match them however you like.

    Yes, shared nodes are really useful for making a scene adjustment ripple across all shots in a scene.  It's something more useful to me in the main grade after I get everything in my timeline's color space.

    For me, what I like about pre- and post-clips is that I typically have 2 or 3 nodes in my pre grade, and the purpose of my pre-clip is just to prepare footage for my main grade.  For example, a team I work with frequently really likes slightly lifted shadow detail, so I'll give a little bump to shadow detail and then run the color space transform in my pre-clip.  If one camera is set up really badly one day and I need two different pre-clips for that camera, I'll just make multiple incrementally numbered groups for that camera, so I've never had a reason to put a shot in multiple groups.

    The other thing I really like about groups is that you get a little colored icon on the timeline thumbnails of all shots in the current group.  This makes for a nice visual sanity check when I'm scanning through a ton of footage on a long project.  Usually the camera used is fairly obvious from the thumbnail, from A cams to drones to body cams, so the combination of thumbnail and colored group icon is a nice check that I've prepped all my footage correctly.

    I know there is some extra flexibility in putting grading nodes before or after a color space transform, but for me on a large project, my main purpose in the pre-clip is to just get things close and into the proper color space.  If I really need to do more adjustments that have to be done pre-color space transform, I'll flip around color spaces in my main grade.  But my goal is to do all my shot to shot and scene balancing in my main grade with everything in my delivery color space.  Keeps things sorted for me.  😄

    Ultimately, it all depends on the type of work you are doing.  If I were doing feature work all shot on one camera type, my system wouldn't be very useful.  But I do a lot of doc work and outdoorsy adventure stuff, typically shot on all types of cameras in all conditions, so it can be really useful for keeping things organized.

    One last trick with the groups is that if I'm also mixing 6k, 4k, and 2k footage, I can throw a little sharpening or blurring into the post-clip section to match up visual detail between cameras.  Then use the timeline grade to do any overall finishing if needed.

    Ultimately, DaVinci is just a wonderfully flexible system for developing custom workflows that work for you.  I love that there are so many ways to organize and sort through the color process.  👍


  13. Just wanted to point out that the OP is shooting on a GH5 and not a GH5s.  They have totally different sensors and very different noise patterns.

    I personally really like the GH5 noise.  At 1600 ISO and less it has very little color noise.  I actually really like the way it looks at 400 and 800 ISO... feels organic.

    Also, the video posted above is a perfect example of why I believe you should analyze footage using a view LUT.  V-Log maps black to 128 out of 0-1023, which I believe is higher than any other manufacturer's.  You can take any camera, lift its blacks 10%, and see all kinds of garbage.  That said, the fixed pattern and color noise of the GH5s is not ideal.
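
    To put a number on that, taking the 128-out-of-1023 figure at face value, the pedestal alone parks black at about 12.5% of code range, which is exactly the kind of lift that makes ungraded log look like garbage.  A quick sanity check, with a crude stand-in for what a view LUT does first:

    ```python
    # V-Log 10-bit black pedestal, per the figure quoted above.
    black, full = 128, 1023
    print(f"black sits at {black / full:.1%} of code range")  # ~12.5%

    # A view LUT's first job (very crudely) is to put black back at 0.
    def remove_pedestal(code_value):
        return max(code_value - black, 0) / (full - black)

    print(remove_pedestal(128))   # 0.0 -> displays as true black again
    ```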


  14. 4 hours ago, kye said:

    I used to try grading the LOG footage manually, without having anything convert between the colour spaces and I'd create awful looking images - using a technical LUT or Colour Space Transform to do the conversion really made a huge difference for me

    A big +1 on this for myself as well.  Some people seem to get nice results just pulling on log curves until they look good, but I find that if I handle all the color transformations properly, I'm reducing the number of variables I'm dealing with, and I have the confidence that I'm working in the proper color space.  Once in the proper color space, controls work more reliably, and it is also a big help if you are pulling material from a variety of sources.

    I have not tried the ACES workflow, but since I'm pretty much always delivering in Rec 709, I like to use that as my timeline color space.  So I just convert everything into that.

    One feature I also really like about Resolve is the ability to use the "grouping" functionality that opens up "Pre-Clip" and "Post-Clip" grading.  Then I group footage based on different cameras, and I can apply the Color Space Transform and any little adjustments I want to make per camera in the "Pre-Clip" for that group/camera.  That way when I get down to the shot by shot balancing and grading, I already have all my source material happily in roughly the same state and I can begin my main balance and grade with a clean node graph.  On a typical project, I may have material from a variety of Reds with different sensors, DSLRs, GoPros, Drones, and stock footage.  If I had to balance everything shot by shot by just pulling on curves, I think I'd go crazy.

    If you don't work in Resolve, you can do roughly the same thing by using adjustment layers in Premiere and just building them up.  Use the bottom adjustment layer to get all your footage into Rec 709 with any custom tweaks, then build your main grade above that. 

    Even if you are not working from a huge variety of source material, many projects will at least have drone and DSLR footage to balance.  You can then develop your own LUTs for each camera you use, or just use the manufacturer's LUTs to get you to a proper starting place.

    One final advantage of using the Color Space Transform instead of LUTs is that LUTs will clip your whites and blacks if you make adjustments pre-LUT that go outside the legal range.  The Color Space Transform node will hold onto your out-of-range color information in case you want to bring it back further down the line.
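
    Here's a toy NumPy illustration of the difference: a made-up 2.2-gamma curve applied once as a lookup table and once as a float function.  The exact curve doesn't matter, only what happens to the out-of-range values:

    ```python
    import numpy as np

    x = np.array([-0.05, 0.50, 1.10])   # graded values pushed out of range

    # LUT path: the table is only defined over 0..1, so the input has to be
    # clamped first and the out-of-range detail is gone for any later node.
    lut = np.linspace(0, 1, 1024) ** (1 / 2.2)          # placeholder 1D LUT
    via_lut = lut[(np.clip(x, 0, 1) * 1023).astype(int)]

    # Float-transform path: the same curve applied as a function keeps the
    # overshoots, so a later node can still pull them back into range.
    via_cst = np.sign(x) * np.abs(x) ** (1 / 2.2)

    print("LUT:", via_lut)   # -0.05 and 1.10 collapsed to 0.0 and 1.0
    print("CST:", via_cst)   # negative and >1.0 values survive
    ```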


    I watched the video a few times on my laptop, but the noise didn't look unusual.  I think you're just seeing the noise in the lifted shadows.  Log is not a normal viewing format; that is why view LUTs are typically created for monitoring on set or in camera.  It is supposed to be adjusted to your delivery format (Rec 709, Rec 2020, DCI-P3, film print) before final viewing.

    It's sometimes described as a "digital negative" in that, like a film negative, it holds a wider dynamic range than a final film print.  But when you view it without a display LUT, you are seeing all the gory details that shouldn't be present, or should at least be very suppressed, in your final grade.


    If you ever end up with some of that in a shot you want to use, you can suppress it pretty well by putting a soft mask around the problem area, keying the purple, and desaturating it.  Best to just avoid it if possible 😉 or use a better lens if you have one when shooting high contrast.
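
    A rough sketch of that fix with OpenCV, in case it's useful.  The hue bounds, blur size, and desaturation amount are all placeholders you'd tune per shot, and a real pass would also restrict the mask to the bright problem area:

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("fringed.png")                     # hypothetical file
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)

    # OpenCV hue runs 0..179; purple/magenta sits roughly around 130..160.
    hue = hsv[..., 0]
    mask = ((hue >= 130) & (hue <= 160)).astype(np.float32)
    mask = cv2.GaussianBlur(mask, (0, 0), 5)            # soften the key's edge

    # Knock the saturation down inside the soft mask only.
    hsv[..., 1] *= 1.0 - 0.9 * mask
    out = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    cv2.imwrite("defringed.png", out)
    ```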


  17. I think the most distracting element in the sample footage is the purple fringing and pulsing focus.

    Log color profiles have very lifted shadows, so you will see a lot of noise in them when looking at the footage without a view LUT or grade.  Normally, you would apply a LUT or an S-curve to push your shadows near black and roll off the highlights.  Once the shadow area is compressed, you won't see as much noise, if any.  If you are going for a flat look, you will probably want to run some kind of denoise on the shadows to clean them up.
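
    For the curious, the S-curve part is easy to sketch.  This is a basic sigmoid contrast curve on normalized values, which does most of what a simple view adjustment does: shadows pushed toward black, highlights rolled toward white.  The strength value is a placeholder:

    ```python
    import numpy as np

    def s_curve(x, strength=2.0):
        """Sigmoid contrast: compresses shadows toward 0 and rolls
        highlights toward 1; higher strength = steeper mids."""
        x = np.clip(np.asarray(x, dtype=np.float64), 1e-6, 1 - 1e-6)
        return x**strength / (x**strength + (1 - x)**strength)

    for v in [0.1, 0.3, 0.5, 0.7, 0.9]:
        print(f"{v:.1f} -> {float(s_curve(v)):.3f}")  # 0.1 -> 0.012, etc.
    ```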

    The advantage of log is that you have the option to compress shadows and highlights and more flexibility in how much you want to lift them.  If you shoot with a standard profile, it's much harder to adjust shadow detail... if there is any at all.


  18. 12 minutes ago, Castorp said:

    I didn’t find that slanted lens comparison of Z6 vs A7iii much better than the rest of the YouTube stuff.

    I’m not sure the Nikon is over-exposing. Isn’t it more that Sony (and Fujifilm) are underexposed? They had to raise the exposure in post of all the Sony images since they weren’t correctly exposed with the lightmeter reading. 

    Looks like they’re shooting the Nikon with its default sharpening (which is too high).

    The Z 6 colour and white balance is to my eyes far nicer than the Sony.

    PS And Canon too for that matter. I never liked Canon colour - much to warm and glossy-feeling. 

    The thing about the Slanted Lens comparison for me was just that, at regular exposures, the Nikon was obviously clipping whites earlier than the Sony, and the general consensus seems to be that they use a very similar if not the same sensor.  Maybe the same-sensor assumption is wrong, though.

    I do think that clipping or overexposing highlights gives a video-ish look.  You may be right, though, that the Fuji and Sony cameras underexpose on purpose to help protect them.

    It is interesting how much shadow detail the Nikon could recover in the test.  That's been my general experience with Nikon.  I *think* I also remember some extra superwhite highlight detail that could be pulled out of the Nikon video image once it was brought into the grade, so that may be what accounts for the clipping.  But I could be totally misremembering.

    I agree Canon footage renders skin too warm and pinkish, but people seem to like that, and Canon cameras seem to give pretty good results out of the box without adjustments.


  19. Ahhhh, "Motion Cadence".  I love it when that old chestnut gets pulled out regarding a cinematic image.  I've spent many days in my career tracking and matchmoving a wide variety of camera footage from scanned film, to digital cinema cameras, to cheap DSLR footage.  So I find the whole motion cadence thing fascinating since I sometimes spend hours to days staring at one shot trying to reverse engineer the movement of a camera so it can be layered with CGI.

    So, leaving out subtle magical qualities visible only to a select subset of humans with superior visual perception, or descriptions that read like fine wine tasting notes, I can only think of a few possible causes of perceptible motion cadence.  I'll lay them out here, but I'm genuinely curious about any other factors that may contribute, because "motion cadence" in general plays hell with computer-generated images, which typically have zero motion cadence.

    #1 ROLLING SHUTTER.  The bane of VFX.  Hate it!  Hate it!  Hate it!  For me personally, this must be 90% of what people refer to as motion cadence.  It plays hell with the registration of anything you are trying to add into a moving image.  Pictures and billboards have to be skewed, and fully rendered images slip in the frame depending on whether you are pinning them to the top, middle, or bottom of the frame.  I work extensively with a software package called SynthEyes that tries to adjust for this, but it can never be fully corrected.  For pinning 2D objects into a shot, Mocha in After Effects offers a skew parameter that will slant the tracking solution to help compensate.  This helps marginally.

    #2 Codec encoding issues.  I have to think this contributes minimally, since I think extreme encoding errors would show up more as noise.  I've read theories about how Long GOP can contribute to this versus All-I, but I've never really noticed it bending or changing an image in a way I could detect.  I'd think it would be more influenced by rolling shutter, however, so I can only think it contributes maybe 5-10% of the motion cadence in an image.  Would love to know if I'm wrong here and if it's a major factor.  Beyond casual observation, anything technical I could read regarding this would be welcome.

    #3 Variable, inconsistent frame timing.  This is what I think most people are actually referring to when they bring up the motion cadence of a camera.  But outside of a camera doing something really bizarro like recording at 30 fps then dropping frames to encode at 24, I can't believe that cameras are that variable in their recording rates.  I may be totally wrong, but do people believe that cameras record frames a few milliseconds faster or slower from frame to frame?  Does this really happen in DSLR and mirrorless cameras?  I find it hard to believe.  I could see a camera waiting on a buffer to clear before scanning the sensor for the next frame, but I can't believe it's all that variable.  If it were really that common, wouldn't it be fairly trivial to measure and test?  At the very least, some kind of millisecond timer could be displayed on a 144 Hz or higher monitor and then recorded at 24p to see if the timer interval varies appreciably (there's a rough sketch of this measurement below).

    #4 Incorrect shutter angle.  This could be from user error; I've seen enough of it on YouTube to know it's common.  I'd assume it's also possible that a camera would read the sensor at a non-constant rate for some reason, but I'd think that would show up as rolling shutter anomalies as well.  Dunno about this one; I think it may be more of a stretch.  It should also be visible at the frame level by looking for tearing, warping, or some kind of smearing.  So I doubt this happens much, but it should be measurable, like rolling shutter, with a grid or chart, and detectable by matchmoving software the way rolling shutter is.

    That's generally all I can think of, and without any kind of proof, I'm calling bullshit on #3, but I'd be happy to be proven wrong.  I'd be genuinely intrigued to find that some cameras vary their frame recording intervals by any amount visible to the human eye.
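
    On the #3 test, here's roughly what the measurement half could look like with OpenCV, reading back the frame timestamps of a clip of that recorded timer.  Big caveat: these are container timestamps, so they only prove what the muxer wrote; testing the actual sensor timing still means reading the recorded timer digits off the frames themselves.  The file name is a placeholder:

    ```python
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("timer_test.mp4")   # hypothetical clip of the timer
    stamps = []
    while True:
        ok, _ = cap.read()
        if not ok:
            break
        stamps.append(cap.get(cv2.CAP_PROP_POS_MSEC))
    cap.release()

    # Frame-to-frame intervals; for true 23.976p every value should be ~41.7 ms.
    intervals = np.diff(stamps)
    print(f"mean {intervals.mean():.3f} ms, std {intervals.std():.3f} ms, "
          f"min {intervals.min():.3f} ms, max {intervals.max():.3f} ms")
    ```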

    If anyone has any real insight into this, I'd love to read more about it because it directly affects something I deal with frequently.


    Regarding a video-ish look to the Z6 footage, I found the following video interesting.  Nikon seems to overexpose by about a stop or so compared to Sony.  I remember when I shot with a D5200 a few years back, I got in the habit of underexposing by a stop or two and pulling up my images in post.  Considering that clipped or blown-out highlights are one of the major contributors to a "video look", my guess is that Nikon's default exposure levels may be to blame for this opinion among shooters who mostly shoot full auto.

    Luckily, the easy solution is to underexpose and balance the exposure in post.  My old D5200 footage graded really well as long as I protected the highlights; it produced a really nice, thick, cinematic image with some minor corrections.  That thing was a gem of a camera for $500.  But yeah, if you shot the standard profile and just auto-exposed it, the result was pretty crap.  Nikon even let you install custom exposure curves so you could tweak the rolloff for highlights and shadows.  I remember we intercut it with a $50,000 cinema camera on a few projects and nobody was the wiser.  😄

    While I know there are a lot of really talented filmmakers who experiment, measure, and adjust their workflow to wring every last drop out of the images their cameras make, I think there are a lot of folks who shoot "full auto", drop their video into a timeline, and become disappointed by the result.  Hell, one YouTuber's whole approach is that he's just an average guy who shoots everything at pretty much default settings, and then he does camera reviews.  Nothing wrong with that for the target market, I guess.  Here you can see him blowing out the sky and background in his video while the exposure auto-adjusts.  Nikon needs to work on getting better default settings into their cameras to help support this crowd.  I think it's a weak spot for them, judging from the video reviews I've been seeing, because I know that with even just a little effort they can create a superb image.

    Canon cameras also produce great images, but one of Canon's strengths, IMO, is that their default out of the camera results are really exceptional.  Whoever does the final tweaking for their cameras seems to be really good at ensuring the default settings come out of the camera looking really nice.


    Yeah, to be honest, now that I think on it more, just because they are hitting 100 IRE on the waveform, who knows what's going on in the camera's color profile.  There could be superwhite data or some other special sauce that can be recovered above 100 IRE.

    7 minutes ago, thebrothersthre3 said:

    If you look at the brightest portion of the chart on all four cameras none of them really match. That said I don't know really how it works. 

    Yeah, I'd expect them to at least try to match the image output a little better.  Still curious....
