stephen

Everything posted by stephen

  1. Sigma FP caught my attention. It is more compact and has a better form factor than the BMPCC 4K. You can add a small lens, put the camera on something like a FeiyuTech G6 Max and have a very lightweight gimbal setup. And it takes much better stills. It is tempting as a travel cinema camera. As bjohn stated in another thread, will wait and see what Sony and Canon will offer, but the Sigma FP suddenly climbed to the top of my cinema camera wish list. 🙂

Found a very interesting article by Timur Civan. He proposes a workflow in Davinci Resolve which can extract 1-2 stops more dynamic range from Cinema DNG, and claims Sigma FP image quality is on par with the Panasonic S1H. And those extra stops are in the highlights. https://timurcivan.com/2020/06/an-examination-of-sigma-fp-raw-workflow-and-how-to-get-the-most-from-the-fp/?fbclid=IwAR3ga9HcmrnVdSr1R9V4Vb18_A5_BUsjeLYIPV33MF-2lic1sdgIDoKrb1U After all, both cameras share the same Sony sensor.

The basis of Timur's method is the trick Juan Melara shows in one of his videos: Insider Knowledge - A better way to grade Ursa Mini CinemaDNGs. We had a discussion about it in another thread with Kye, and it looks like my understanding of that video is correct. The basic idea is to choose a larger color space and gamma when debayering RAW footage, and specifically Cinema DNG, in Resolve (the camera raw settings). For the rest Kye was absolutely right: once in Resolve, color space transformations have no further impact on dynamic range.

Having in mind that with slimRAW you can compress Cinema DNG 3:1 (lossless compression) and even further, 5:1 or 8:1, with lossy compression (similar to BRAW), all you need is just 1-2 more SSDs in the bag. Blackmagic Video Assist recorders would be useful in a more professional environment; for travel, SSDs would be more than enough. Will play with some Sigma FP footage from the net. Would be grateful if somebody knows where I can download more RAW footage from the Sigma FP.
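As a sanity check on the "just a few more SSDs" claim, here is a back-of-the-envelope sketch of CinemaDNG data rates at those compression ratios. The frame dimensions, bit depth and ratios are assumptions for illustration; it ignores DNG headers, audio and container overhead, so the numbers are rough estimates only.

```python
# Rough CinemaDNG storage math: 4K DCI, 12-bit, 24 fps, with 3:1 / 5:1 / 8:1
# compression ratios. Ignores file headers, audio and container overhead.
W, H, BITS, FPS = 4096, 2160, 12, 24

frame_mb = W * H * BITS / 8 / 1e6      # ~13.3 MB per uncompressed frame
rate_mbs = frame_mb * FPS              # ~318 MB/s uncompressed

for ratio in (1, 3, 5, 8):
    mbs = rate_mbs / ratio
    minutes_per_tb = 1e6 / mbs / 60    # recording time on a 1 TB SSD
    print(f"{ratio}:1  {mbs:6.1f} MB/s  ~{minutes_per_tb:4.0f} min/TB")
```

At 3:1 lossless this works out to roughly two and a half hours per terabyte, which is why a couple of SSDs cover a travel shoot.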
  2. Hmmm... Am not convinced this is true. And am not sure what non-destructive means here. Everything I watch and read from professional colorists, including Juan Melara's video, points in the opposite direction (HLG -> REC709 -> HLG = REC709).

Let's check Melara's video. He concentrates mainly on dynamic range (gamma). The whole point of the video can be summarized like this: the URSA Mini is capable of 15 stops of dynamic range, but it looks like you don't get all those 15 stops because BMD gamma can only hold 12 stops, effectively cutting some of the dynamic range of the camera. He then proposes two methods / solutions:

1. Play with curves. (But you still play with 12 stops of dynamic range at input.)
2. Choose Gamma = Linear. A linear container (value for Gamma) can hold more stops of dynamic range, so you get the full 15 stops, which yields a better result at the end.

At 1:25: "My theory is that the BMD 4.6K film curve as a container is actually not able to hold the entire 15 stops of the URSA Mini..."

About color space, at 2:20: "Normally I'd recommend to output to the largest color space available and you can use P3 D60... To keep things simple I'll choose REC709..." In my understanding he says: get the widest color space you can, but here for simplicity I choose REC709 (narrower) because my main point in this video is about gamma (dynamic range).

Now if we assume what you say is true, then BMD gamma should hold all 15 stops of the URSA Mini's dynamic range. Why then go to Linear? It would have been sufficient to choose it in a color space transform and it would reveal the whole 15 stops. But that's not what Melara is saying. He's saying exactly the opposite: that BMD gamma as a container is limiting the dynamic range of the URSA Mini sensor. And REC709 gamma holds even fewer stops of dynamic range than BMD gamma!

Now let's go to HLG color space and gamma. As you said, there are several approaches / methods to color correct and grade.
Let's compare the one that plays with curves, saturation etc. with Color Space Transform. When you place a GH5 or Sony A7 III HLG or LOG clip on the timeline, Davinci assumes by default that your clip is REC709 color space and gamma. If you don't tell Resolve the color space and gamma your clips were shot in, it doesn't know and assumes REC709 (same as your timeline by default). But REC709 color space and gamma are much more limited than the REC2020 color space and Rec.2100 HLG gamma that HLG clips are in. By doing so you effectively destroy the quality of your video. No matter what you do later (curves, saturation, LUTs), your starting point is much lower. Yes, at the end it's always REC709, but everything so far points to losing quality when the correct conversion is not done, because you don't use the full range of colors and dynamic range your camera is capable of. That's why they say Color Space Transform does this transformation non-destructively. At least that's my understanding. But may be wrong and am curious to hear others' opinions.

If your scene has limited dynamic range you may not see a difference between the two methods. But in extreme scenes with wide dynamic range it does make a difference. Same for colors. Some color spaces approximate REC709 nicely, others (Sony S-Gamut) don't. No surprise people are complaining about banding, weird colors and so on. Most of those problems could be resolved with Color Space Transform, even for 8-bit codec footage.

The second problem with using curves, saturation etc. is that this method is not consistent from clip to clip, between different lighting conditions, etc. And it involves a lot more work to get good results. At least that's my experience. Have to tune white and black points for each clip individually, then saturation etc. Change one color, another one goes off. Can't apply all settings from one clip to all, especially when shooting outdoors in available light and changing lighting conditions. It's a lot of work.
Just read how much work was put into creating the Leeming LUTs, how many shots had to be analyzed, etc. With Color Space Transform it takes me 3 to 5 minutes to have a good starting point for all the clips on the timeline. Apply CST on one clip, take a still grab, then apply the still (and the CST settings) to all clips. And almost all of them have white and black points more or less correct, skin tones OK, etc. It's a much faster method. It was a game changer for me.

With BMPCC 4K clips you can get away with the first method and not spend tons of time, because Davinci knows quite a lot about its own video clips when you place them on the timeline. It's much better than GH5 or Sony HLG. It is for those 8-bit compressed codecs that the CST method shines the most. Again, at least that's my experience. Color matching BMPCC 4K BRAW and Sony A7 III 8-bit HLG is for me now easier than ever.
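The "container" idea from Melara's video can be illustrated with a toy quantization experiment. This is an idealized sketch, not BMD's actual film curve or any real camera curve: encode 15 stops of linear light into 10-bit code values, once with a linear transfer and once with a pure log curve, and count how many code values land in each stop.

```python
import math

# Toy illustration of a transfer curve as a "container" (idealized curves,
# not BMD Film or any real camera curve): 10-bit code values per stop.
MAX_CODE = 2**10 - 1    # 1023
N_STOPS = 15

def codes_per_stop(encode):
    counts = []
    for k in range(N_STOPS):
        hi, lo = 2.0 ** -k, 2.0 ** -(k + 1)   # linear-light span of stop k
        counts.append(round(encode(hi) * MAX_CODE) - round(encode(lo) * MAX_CODE))
    return counts

linear_codes = codes_per_stop(lambda x: x)                          # linear
log_codes = codes_per_stop(lambda x: 1 + math.log2(x) / N_STOPS)    # ideal 15-stop log

# linear: 511, 256, 128, ... each deeper stop gets half the codes,
# so the shadow stops quantize away to almost nothing.
# log: every stop gets the same ~68 codes, so all 15 stops survive.
```

The point of the sketch: with a linear encoding the deep shadow stops get almost no code values, while a log curve spreads the codes evenly across all stops, which is one concrete way a curve can "hold" more or fewer stops.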
  3. @Ki Rin It is in principle correct. The Davinci Resolve timeline by default is REC709 color space and gamma. Same for Adobe Premiere. When you drop a clip in, Davinci assumes it is REC709 color space and gamma, so without some form of transformation the colors are not correct. You have to tell Davinci what color space and gamma your video clip was shot in, in order to get correct colors and gamma and use the maximum quality of your source material. That's the principle, well explained in the Russian video. As said, one way is to use LUTs for the transform; the other ways are Color Space Transform or ACES. The goal of all 3 methods is to have a good starting point with a correct interpretation of the colors. For me Color Space Transform gives the best results and is the easiest. Makes matching BMPCC 4K clips shot in BRAW and Sony A7 III clips shot in HLG relatively easy. That's my experience.

@kye Don't agree with your logic. Same as saying: don't like apples, so figured won't like oranges as well. They are different.

Second, let's be clear and precise. REC2020 and REC709 are color space values in the Color Space Transform effect and in general. Rec.2100 HLG and Rec.2100 ST2084 are gamma values. Now if you transform from REC2020, which is a much wider color space, to REC709, which is a much narrower one, then go back, it's no surprise the result will be different (and worse). You basically destroyed the quality of your video source. HLG -> REC709 -> HLG = REC709 colors. There is no way to get back colors which are absent in REC709.

Same is true for gamma. HLG gamma holds around 12 stops of dynamic range (given your camera sensor has this range), REC709 6 stops, some sources claim up to 7. Once you have converted your video to REC709 gamma with 7 stops, there is no way to get back to 12 stops. The dynamic range has already been destroyed.
True, at the end your video on the timeline is only REC709 color space and gamma, but one way you start from a much wider color space and gamma, and the other way you start with the same limited REC709. In my understanding that's exactly what the Color Space Transform workflow tries to avoid. Here is a video from Juan Melara which goes in the opposite direction: by using a color space transform to a wider color space and wider gamma, he claims to be able to get more dynamic range from BM Ursa Mini Cinema DNG footage. Am no expert in color grading/correction, but Juan is.
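The "no way back" argument can be demonstrated numerically. Below is a minimal numpy sketch working in linear light only (gamma curves left out for simplicity), using the commonly published BT.2020-to-BT.709 primaries matrix: a pure matrix round trip is reversible, but once out-of-gamut values are clipped into the Rec.709 container, the original color cannot be recovered.

```python
import numpy as np

# Linear-light BT.2020 -> BT.709 conversion matrix (values as commonly
# published, e.g. in ITU-R BT.2087); the inverse is computed numerically.
M_2020_to_709 = np.array([[ 1.6605, -0.5876, -0.0728],
                          [-0.1246,  1.1329, -0.0083],
                          [-0.0182, -0.1006,  1.1187]])
M_709_to_2020 = np.linalg.inv(M_2020_to_709)

c2020 = np.array([0.10, 0.80, 0.10])       # saturated green, outside Rec.709

# pure matrix round trip (no clipping): fully reversible
assert np.allclose(M_709_to_2020 @ (M_2020_to_709 @ c2020), c2020)

# but the Rec.709 container only holds [0, 1]; clipping destroys information
c709 = np.clip(M_2020_to_709 @ c2020, 0.0, 1.0)
back = M_709_to_2020 @ c709
print(np.allclose(back, c2020))            # False: wide -> 709 -> wide is lossy
```

This is only the color space half of the story; the same one-way loss happens in the gamma direction when 12 stops are squeezed into a 6-7 stop curve.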
  4. PS. As the video shows, method 3 is usable only if you shoot with a log profile or RAW. Only in this case will Davinci Resolve know how to interpret the colors correctly. For Panasonic cameras those would be V-Log and HLG, and for Sony S-Log2, S-Log3 (to avoid) and HLG (best). If your Sony camera doesn't support HLG, then the only option would be the standard S-Log2/S-Gamut picture profile. Or some kind of hybrid profile where Color Mode is S-Gamut and gamma is Cine2 or Cine4. But you'll have to experiment. Have no idea how the footage will look, and in Resolve there is no value that corresponds to Cine2 or Cine4 gamma. You'll have to try several options for the input gamma. Not sure how this will work.

For Panasonic, the color space transform for a GH5 with HLG will look like this in Resolve:

Input Color Space: Rec.2020
Input Gamma: Rec.2100 HLG
Output Color Space: Rec.709
Output Gamma: Rec.709
Tone Mapping: Luminance Mapping

For the Sony S-Log2 picture profile it would be:

Input Color Space: Sony S-Gamut
Input Gamma: Sony S-Log2
Output Color Space: Rec.709
Output Gamma: Rec.709
Tone Mapping: Luminance Mapping

Sometimes I play with the output gamma. Can use Gamma 2.2 or Gamma 2.0 instead of Rec.709 (the default for Resolve is Gamma 2.4). If you can use an X-Rite color chart during the shoots you will be golden. Color matching footage from the 2 cameras will be really easy.
  5. It's a very interesting question. There are 3 methods to approach the task, and those 3 methods are actually different ways to structure your color correcting/grading workflow in Resolve. Color matching two different cameras has 2 steps:

1. Correctly interpret the colors in Resolve for each camera.
2. Color match them if there are differences.

Hint: if the first step is done correctly, you will have very little to no work on the second one. There are plenty of videos on youtube, but they all go straight to the second step, missing the crucial first one. For one or more of their cameras, most people don't know how to do step one correctly. That was my case until recently.

Method 1: LUTs, and more specifically Leeming LUT PRO. If you shoot each camera with the settings given by the author and then apply the corrective LUTs in Resolve for each camera, you will get correct colors from both cameras. They should look the same. Panasonic GH5 and the Sony A7 series are supported. Tried this on BMPCC 4K and Sony A7 III and it works reasonably well. But not always, and setting up the cameras correctly each time requires some work and attention. https://www.leeminglutpro.com/ A few professional colorists publishing tutorials on youtube advise against LUTs for color correction.

Method 2: ACES. Used the ACES method in the past to work with Sony S-Log2 video and the results were great.

Method 3: the Color Space Transform effect in Resolve. This one was a game changer for me and is the method I currently use as a first step for color grading/correction, including when the footage is from different cameras. My productivity in Resolve jumped at least two times and the results are great. Wish I knew this before! The best video which explains the method, and probably the most important part of color correction in Resolve, is unfortunately in Russian. But you will still be able to pick up the principles and the settings for both cameras. This guy is an ex professional colorist.
Never seen anybody with tutorials on youtube explain the theory and practice so logically and simply. Apart from the color space transform values for input and output color space and gamma, you should also set Tone Mapping to Luminance Mapping. That's very important. Once the color space transform is done on one clip, do a stills grab and then apply it to all clips from that camera with a few clicks. Same process for the clips of the other camera(s).

All 3 methods require you to be well aware of what picture profile you used during the shoot. For Sony cameras this colorist advises using HLG. If you don't have HLG, then you should use color profiles, or create your own, with Color Mode = S-Gamut, because it is not clear what color space the Sony picture profiles with Color Mode = Cinema, Pro or Movie use. Hope this helps.
  6. In my experience 8:1 and 5:1 BRAW are OK for the Sandisk Extreme Pro. My clips are short, 10 sec to 2 min, but so far never had a problem recording 5:1 4K DCI 24 fps on a fast Sandisk SD card.
  7. 220 ms latency, measured. It's there, but I can live with it. Don't have the Accsoon CINEeye, so can't compare.
  8. Bought the Zhiyun Weebill S image transmission module for 150E. It can be purchased and used separately from the gimbal. Works with any camera that has HDMI out. Nice monitoring solution for the BMPCC 4K. For 150E plus my Samsung Galaxy phone, or an older Samsung Note 4 that I don't use, got a very nice OLED monitor with great colors, 800-900 nits of luminance and a touch screen interface. All major tools, like peaking, histograms, zebras and false color, are available. Even more interestingly, it communicates with the camera, and you can set all major parameters like white balance, ISO etc. from the touch screen. So it is a monitor but also a controller. A separate USB cable is used for the controlling functionality. Quite a nice alternative to much more expensive specialized monitors. And you can easily change the screen size too.
  9. Samsung T5 drives use TLC V-NAND memory, so the problems related to SSD drives with QLC are not relevant. SmallRig holders for the Samsung T5: https://www.amazon.com/SMALLRIG-Bracket-Holder-Samsung-Compatible/dp/B07KW2B5C1/ref=pd_cp_147_1/147-3534738-4745748?_encoding=UTF8&pd_rd_i=B07KW2B5C1&pd_rd_r=4ba7f659-830e-45de-ba28-1fd2420d925b&pd_rd_w=k7mbp&pd_rd_wg=en7fQ&pf_rd_p=0e5324e1-c848-4872-bbd5-5be6baedf80e&pf_rd_r=5BBN4SQPTYQWJG8XN8HP&psc=1&refRID=5BBN4SQPTYQWJG8XN8HP https://www.amazon.com/SMALLRIG-Bracket-Samsung-SanDisk-Compatible/dp/B07X2PHMCF?SubscriptionId=AKIAILSHYYTFIVPWUY6Q&tag=duckduckgo-ffab-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07X2PHMCF Tilta has a focus side handle with battery and Samsung T5 holders. Quite convenient if you are OK with a second handle. https://tilta.com/shop/bmpcc-4k-side-focus-handle-with-r-s-sony-f970-battery/
  10. A guy told you. Samsung T5 portable SSDs, which are among the approved ones, are $90 for 500GB and $137 for 1TB on Amazon. And that's cheap for me, cheaper even than SD cards per GB. https://www.amazon.com/Samsung-T5-Portable-SSD-MU-PA1T0B/dp/B073H552FJ/ref=sr_1_3?crid=1HC73MBGQQW2C&keywords=samsung+t5+portable+ssd&qid=1575587372&smid=ATVPDKIKX0DER&sprefix=samsung+t5%2Caps%2C252&sr=8-3 And on approved media the BMPCC 4K has no limitations. By the way, tried a Micron M.2 SSD in an enclosure just because of the smaller dimensions, and the BMPCC 4K didn't recognize the drive. But read that other people were successful using non-approved SSDs. So it's up to you. I use the Samsung T5 and SD cards. The BMPCC 4K can record BRAW 5:1 on an SD card, and that's good enough for 90% of what I shoot. For the rest there is the Samsung T5 SSD.
  11. 1. You can't use Canon EF-S (APS-C) lenses on any speed booster or Canon full frame camera. They have a plastic part that protrudes deeper than normal full frame EF lenses. Some people modify them (cut the protruding part, or simply pop off the plastic part at the back of the lens) in order to mount them on a full frame camera. I've done it on a cheap EF-S 18-55mm f3.5-5.6 but won't do it on the expensive EF-S 17-55mm f2.8. https://en.wikipedia.org/wiki/Canon_EF-S_lens_mount The solution is to use third party lenses, like Sigma or Tamron; they have a standard EF mount and fit without problems.

2. Viltrox speed boosters are cheap but often problematic. That's my conclusion. Have exactly the same combination as The ghost of squig (Sigma 17-50mm f2.8 + Viltrox EF-M2 II) and it doesn't work. Tried BMPCC 4K firmware 6.2 and 6.6 and Viltrox firmware 2.3 and 3.3. Neither stabilization nor aperture control works in any combination of firmware. So maybe my Viltrox is defective, because it can't even change the aperture of any other Canon lens I have (24-105mm, 50mm, Tokina 11-16mm) except the Canon EF 35mm f2 (old version). By the way, there is a new Viltrox firmware 3.4 but can't test it now. The solution for me is unfortunately to buy the much more expensive Metabones speed booster.

Otherwise the BMPCC 4K is great for me. It has its quirks, they are well known, but there are ways to get around them. Like and use the camera a lot.
  12. Have this lens. Image stabilization works very well. It's controlled by the camera and by default is ON. You can turn it off in the menu (Setup). Am sure it was mentioned before, but IS on the BMPCC 4K works only once you start recording if you are using the internal battery. Obviously this is done to preserve the battery, which doesn't last long. If you use an external battery, IS works all the time, both before and during recording. During the firmware upgrade I was also stuck at 70%, but after some time the camera said the upgrade had finished, and it works OK. That was my second try. On the first try I disconnected the USB cable by accident, but after reconnecting was able to start the upgrade from scratch and finish it successfully. So in my understanding the firmware upgrade is well thought out. But the new firmware 6.6 broke (again) the Viltrox speed booster functionality. Aperture and IS don't work with Viltrox firmware version 2.3, nor with version 3.3. ☹️
  13. Presets could be used to speed up processing time. Don't shoot weddings, but had a similar discussion on a local forum. A pro photographer using Sony for weddings said that he uses presets for all images; the 20-30 best photos he processes individually. That's exactly what I would do too. He has clients in Switzerland and Germany and shoots with Sony exclusively. If you like Canon colors out of the box and/or it saves time in post, then of course use Canon. Have both Canon and Sony cameras. 3 months ago got an EOS M, my 2nd Canon camera. RAW ML video is very good: Canon colors, setting white balance in post, etc. But still have to process, color correct and grade each clip individually in order to get the best image out of it. In the process learned quite a lot about how to shoot with this camera and how to process the footage. And it is like this with every other camera. Recently did a small project with the Sony A7 III. Search on youtube for EOS M RAW. There are plenty of videos shot in RAW. Some are crap, some are OK and some stand out. Same camera, same color science (RAW), but quite different results. Zeek really makes this camera shine. But yes, in general getting good colors with Sony is more challenging and requires some research, tests and fiddling with the settings, profiles, etc. The A7S uses a different sensor, and because of it, or because of different color science, it is the worst when shooting 8-bit 4:2:0 x264 video. With the RAW format for photos I don't have a problem.
  14. Here is my grade. Took me 3-4 min more than my regular workflow. Have no preference in terms of color / color science between Canon and Sony.
  15. For me the easiest way, and the one with the best results, to color correct and grade Sony S-Log is to use the ACES color space in Davinci Resolve. You set the correct input and output for the project, or just for selected clips, and it does the transformation from S-Log to REC709 for you. And it does it well. Should work with the HLG BT.2020 color profiles as well, but didn't test it.
  16. It depends what same converter exactly means: the same converter program, or the same program plus the same settings. Same converter + different adjustment values in the converter + different camera = same colors. Have done tests in photo with different cameras, and there is no problem getting the same colors for the same scene and lighting with different cameras and even different lenses. For me it boils down to what frontfocus said. And as how we see color is subjective, a better starting point is also subjective. So there are some camera + converter combinations that we like more than others, and that get us the colors we want and like much more easily.
  17. It depends what kind of AF. Tracking AF in video is not working, at least in the ML RAW modes. It works in photo mode, but don't like it and don't use it. However, single shot AF works in both photo and video, including ML RAW. Single shot as in: you press the button, the camera focuses and signals when focus is achieved. It's kind of slow, but it works and focus is reliable. In my settings different buttons are assigned for focusing and exposure metering. Here is how I usually shoot, no matter if AF or MF is used: 1. Check/correct exposure settings. 2. Frame. 3. Focus. 4. Start recording. It's a slow process but works reasonably well. This is not a camera for fast paced shooting. Settings in video are as follows: AF method - FlexiZone AF square / Focus mode - AF+MF.
  18. Davinci Resolve 16

Agree with you on the point that deleting a clip and moving the play head the same length ahead is annoying. This is something they should definitely think about and fix. But other than that, like the new cut page. Did some edits over the weekend: a short 2 minute video composed from approximately 30-40 smaller clips, and a bigger one assembled from 150-200 clips. The two-timelines approach, with the whole thing on top and detailed clips at the bottom, really helps me cut faster. Guess it all depends on our editing habits. Was able to almost finish the two small projects (total length of 7 min) in one day. Usually it takes much longer, and this is the most difficult part for me: choosing the clips, or parts of the clips, figuring out the way to arrange them, composing the whole video. Now this whole process was easier, faster and even fun. There are good demonstrations around of how the new cut page helps get this part done faster. Resolve is an incredible piece of software, especially considering the price.
  19. The latest EOS M ML RAW builds are from master chef Danne, here: https://www.magiclantern.fm/forum/index.php?topic=9741.msg208959#msg208959 And you can follow development on the same thread. MLV App is de facto the only application to use as the starting point of your editing. It is like an extension of EOS M ML RAW video for many reasons:
- an effective way to remove focus pixels
- correctly debayering the 5K anamorphic crop mode (pseudo-anamorphic in reality, but anyway)
And yes, get the latest MLV App and EOS M RAW versions. Ilya keeps adding new features to MLV App, and Danne keeps fixing bugs and improving the build.
  20. Cinema 4K is fully supported and working on the Samsung Galaxy S10+. The Camera2 API checker application shows full support, which was kind of a problem with previous Galaxies. High bit rate and flat profiles are available through Cinema 4K and other applications. The quality of the video is quite good IMHO. The Filmic Pro checker reports everything is supported, even the log profile, including 1080p@240fps and the Cinematographer Kit with 4K@30fps with log profile. The only exceptions are 4K@60p and maybe the Log v2 profile, which as far as I understand only works on some iPhone models (Apple A12 CPU required). Keep in mind that in Europe the Samsung Galaxy S10/S10+ smartphones come with the Exynos 9820 CPU, which according to Filmic Pro they can hack to better support their application. The version with the Qualcomm CPU doesn't offer this support; at least that was the case with the Galaxy S9. https://filmicpro.helpscoutdocs.com/article/41-samsung-s9-and-s9-filmic-pro-v6-compatibility-guide Optical stabilization is quite good. Basically this is the second best smartphone to buy for video, after the iPhone XS and XS Max.
  21. Also looking for a gimbal able to carry cameras slightly bigger than mirrorless: BMPCC 4K, Canon 5D Mark III. And so far the Moza Air 2 seems to be the best choice, with only 2 drawbacks: weight and size. Moza gives a weight of 1.6kg with batteries. Brandon Li complains in his review about the weight, but in his Feiyu AK2000 gimbal review he says the weight is OK. According to the specs the Feiyu AK2000 with batteries is 1.4-1.45kg. Don't think 150-200g will make such a perceivable difference. And in most other reviews the Moza Air 2 is measured to be close to the Ronin S. Time for Moza to come out with a Moza Aircross 2.
  22. The title "Color Science Means Nothing With Raw... Really?" is misleading, at least in relation to the test you are using as a showcase. Three different cameras were used, and while the sensor is the same, 4 different codecs / formats were used and only one of them is RAW. As the author of the test states below the video: all clips from the E2 were shot in 10-bit 4:2:0 H.265 ZLog; the GH5S was shot in 10-bit 4:2:2 H.264 VLog; the BMPCC 4K was shot in both 12-bit Cinema DNG RAW 3:1 and 10-bit ProRes HQ (which means 4:2:2 color).

So this video test proves nothing in terms of RAW video, simply because only one of the cameras, in some of the clips, shot RAW. The cameras use different processors and electronics, and different codecs, so differences in the final image are normal and expected, even when the sensor is the same. Even if all those cameras shot RAW, it's reasonable to expect the final video NOT to look identical.

If the goal was to color match them, the easiest and correct way would be to use a color chart. Am sure that after color correction with a color chart the images would be pretty close, if not identical. A reference point (a color chart) is needed when matching is the goal, as this more or less guarantees you have correct colors in post no matter what lens, codec, camera or sensor is used. Did some tests for RAW photo, and with a color chart can match almost perfectly images from different cameras in different lighting conditions. Am sure nobody can tell which camera was used for each image. Anyone can easily do this test and repeat the results. In photo, when using RAW, the so-called color science of the camera really means nothing. That's a proven point for me.

Video cameras shooting RAW don't quite use RAW. There is always some alteration of the data (image), like applying a LOG gamma curve, using a particular codec, color space, etc. We discussed this in the other topic around the Tony Northrup tests.
So IMHO, whatever you call it, color science or not, it's logical to expect cameras using the same sensor to yield slightly different images, and cameras using different sensors, processors, codecs, etc. to give different images. The difference is additionally complicated by the fact that color correction for video is more difficult and complex due to different codecs, color spaces, LOG gammas etc. Not many people outside professional editors and colorists put in the effort to master the process. Additionally, differences in dynamic range and other parameters of the sensors also play a role in the perception of the image.

But with some more effort and skill it can be done the same way as in photo: using color charts, using correct color space transformations, etc. And matching could be extended even to cameras that don't use RAW codecs but the much more limiting H.264 8-bit or 10-bit 4:2:2 ones. Did some initial tests and plan to do more. Am confident it can be done, and there are plenty of clues around:

- Zacuto did a test years ago, and on a big screen, highly skilled cinema and video professionals in a blind test were not able to tell the difference between a Panasonic GH2 with 1080p 8-bit 4:2:0 x264 and an Arri Alexa with 2.5K RAW.
- In the movie The Man from U.N.C.L.E. various cameras were used. Here is the full list: https://www.imdb.com/title/tt1638355/technical?ref_=tt_dt_spec Among them the Canon 5D Mark II and GoPro 3. Can you guess which scenes were shot with the Canon 5D Mark II, or even the Canon EOS C500? I can't. GoPro 3 yes, using logic about where such a camera would be used, but for the rest am clueless.
- Look at this guy's test: Arri Alexa vs Canon T3i. Yes, there is a difference in colors, but they can be made quite similar, quite close.

So the bottom line is, it all boils down to how easily and with how little effort we are able to get the colors we like. And how much we can afford to pay. Our preferences for a camera and a certain "color science" are purely subjective.
Which kind of contradicts the science part
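The color-chart matching described above can be sketched as a simple least-squares fit. This is a toy numpy example with synthetic patch data (the 24-patch count, the "cast" matrix and the patch values are all made up for illustration); real footage adds noise and nonlinearity and must be linearized first, so it only matches approximately there.

```python
import numpy as np

# Toy chart-matching sketch: fit a 3x3 matrix mapping camera B's chart
# patches onto camera A's. Patch values are synthetic linear RGB.
rng = np.random.default_rng(0)
cam_a = rng.uniform(0.05, 0.95, size=(24, 3))   # 24 chart patches, camera A

cast = np.array([[1.10, 0.05, 0.00],            # hypothetical color cast
                 [0.02, 0.95, 0.03],            # separating camera B
                 [0.00, 0.08, 1.05]])           # from camera A
cam_b = cam_a @ cast.T                          # same chart seen by camera B

# least-squares solve for M such that cam_b @ M ~ cam_a
M, *_ = np.linalg.lstsq(cam_b, cam_a, rcond=None)
matched = cam_b @ M

print(np.abs(matched - cam_a).max())            # ~0: the two cameras now agree
```

With clean synthetic data the fit is exact; the point is that a chart gives you the reference needed to solve for the correction instead of eyeballing it.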
  23. MLV App works on macOS and allows exporting to any format, including ProRes. The application supports lossless compression. New features are constantly added and it is well supported: https://www.magiclantern.fm/forum/index.php?topic=20025.0
  24. Color science

Fully agree with TheRenaissanceMan and you. We are on the same page here. How easy or difficult it is to get the color we want makes a huge difference. My comments about RAW were mostly for photo. Tony's test is mostly about the photo side of hybrid cameras. Adding video into the bag is not correct, because:

a. As you point out, most DSLR/mirrorless cameras shoot 8-bit 4:2:0, which is quite different and limited compared to RAW and even JPG. Getting the colors we want may be quite difficult and time consuming.

b. RAW video is not equal to RAW photo. Most if not all cameras shoot RAW video in a log format, which means some kind of additional color-related processing is applied in camera. It is not the RAW image as it comes from the sensor. And cameras from different companies have different log formats, even when the sensor is the same. For example the Panasonic GH5S and the Blackmagic Pocket Cinema Camera 4K: same sensor, different log formats. In both cases there is some form of in-camera color processing/manipulation. We still have the problem of measuring those colors in RAW; we still have to go through the development process before any measurement is possible. But it would be correct to say there is in-camera color science, and it is logical to expect that it will affect the end results.

Tony's argument that people will not notice or object to slight variations in color, and sometimes even big color shifts, also is not correct for video. We still want to get the colors we want, and even if we ignore this, we still have to color match different sequences shot under different lighting conditions in one movie/clip.
  25. Color science

IMHO the strength of the RGB CFA filters over the photosites is related more to exposure than to color. A stronger filter = fewer photons reaching the photosite. But still, one pixel (photosite) = one basic color (R, G or B). The other two still need to be generated/interpolated in software in order for the pixel to have all RGB values. There is no need to theorize in that much detail. Everyone can do a simple test. Take a photo of an object with one single color (in RAW), let's say a blue ball. Import the picture into a photo editing program. Camera doesn't matter. If the color of the object was BLUE, you can make it GREEN or RED or any other color. That's why I said RAW has no color.
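The "one photosite = one value" point can be made concrete with a few lines of numpy. This is a deliberately crude sketch: an RGGB mosaic of a flat blue patch, a trivial "demosaic" by averaging, and a swapped channel matrix standing in for a different development/color-science choice.

```python
import numpy as np

# Crude illustration: a raw RGGB mosaic stores one value per photosite.
# "Color" only exists after demosaicing plus a chosen color transform,
# so swapping that transform turns a blue object green.
rgb = np.zeros((4, 4, 3))
rgb[..., 2] = 0.8                               # a flat "blue ball" patch

mosaic = np.zeros((4, 4))                       # one number per site, no color
mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]         # R sites
mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]         # G sites
mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]         # G sites
mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]         # B sites

# trivial "demosaic" for a flat patch: average each channel's sites
raw = np.array([mosaic[0::2, 0::2].mean(),      # R
                mosaic[0::2, 1::2].mean(),      # G
                mosaic[1::2, 1::2].mean()])     # B

swap_gb = np.array([[1, 0, 0],                  # a different "development"
                    [0, 0, 1],                  # choice: swap G and B
                    [0, 1, 0]])
developed = swap_gb @ raw                       # blue ball rendered as green
```

The mosaic itself is just a grid of scalars; which color those scalars become is entirely a decision made in the development step, which is the blue-ball experiment in code form.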