
Posts posted by Dan Sherman

  1. God I love when new cameras come out; watching you artistic types get all bent out of shape about it is incredibly entertaining!

    It seems half of you think every new camera makes everything that came before it absolute crap. The other half vigorously proclaims that their current or favorite camera is the best.

    It's like a battle of the malcontents vs. the perpetually insecure.


  2. 5 hours ago, IronFilm said:

    Yes, but I'm not recommending that people start out with their first wireless be something which is over 8 times more expensive than a RodeLink. Not at all, I recommend instead better products (Sony UWP-D11) which are still in the same general price bracket.

    Sony is a brand I've hated since the first generation Discman. Everything they make seems to come with a ridiculously high "Sony" tax. I refuse to knowingly buy anything they produce at this point, unless absolutely necessary.

    For the price of the Sony kit, I'd just buy a Sennheiser G3 instead.

  3. 25 minutes ago, no_connection said:

    No it's correct. If you don't transform the data when you go from one color space to another you are changing the colors.

    If you transform from a larger colorspace into a smaller one and keep colors you will clip the colors outside it. Your analogy about 16-235 is 100% wrong as there is no way to address colors outside the color space. The red channel will only get brighter, not redder when past 235 for example.

    V-Log has its own color space that happens to fit inside most other color spaces. And if you convert that to Rec.709 it's going to clip unless you change the colors. One way to change them to fit would be to look at them on a normal monitor; the colors would look very flat and desaturated, but would not clip.

    Yes, you will lose colors if you go from a larger color space to a smaller one, but if you use a non-linear transform you can determine where that color loss occurs.

  4. 1 hour ago, interceptor121 said:

    if you squeeze information into a narrower colour space you have clipping.

    Ummm, no. Just no.

    This is just straight-up wrong. If you 'squeeze' the information down to fit into a given color space, you do not have clipping. This is what V-Log does. Clipping occurs when you change color spaces and don't 'squeeze'/adjust the dynamic range.

    For example, Rec.709 8-bit supports the range 16-235 (if memory serves), but your camera is capable of capturing 0-255. If you don't 'squeeze', or in other words compress the dynamic range of the footage, you will clip the shadows below 16 and the highlights above 235. If you use one of the various log profiles, you keep those shadows and highlights in the source file so that you can manipulate them as you see fit.

    The 'squeezing' and 'de-squeezing' plus grading in post is often what leads to the ugly gradients people complain about with 8-bit. It's also why some people want the higher bit depth and chroma sub-sampling options.
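    The difference is easy to sketch in code. A minimal illustration using the 16-235 legal range from the example above (plain Python; `squeeze`/`clip` are my names for it, and real log curves are non-linear rather than this straight remap):

```python
def squeeze(v):
    """Remap the full 0-255 range into the 16-235 legal range.
    Every input level stays addressable; nothing is discarded."""
    return round(16 + v * (235 - 16) / 255)

def clip(v):
    """Naively clamp to 16-235: everything below 16 or above 235 is destroyed."""
    return max(16, min(235, v))

# Squeezing keeps the extremes distinct...
print(squeeze(0), squeeze(10), squeeze(255))
# ...while clipping crushes them together: 0 and 10 become the same value.
print(clip(0), clip(10), clip(255))
```

    Same input range either way, but squeezing keeps every level recoverable in post while clipping throws the out-of-range ones away for good.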


  5. 8 hours ago, interceptor121 said:

    I am not unhappy with the camera I was just expecting more out of the 422 10 bit mode

    Please explain/articulate what you mean by more!

    8 hours ago, interceptor121 said:

    From what I can see with my naked eye footage straight out the camera without LOG looks better in 8 bit mode than it does in 10 bit I play it straight out of my Tv that has a 10 bit panel. I cannot see any benefit of 422 or additional colour using cinelike in the rec709 colour space compared to 420 8 bit. I white balance all my clips on a grey card so they are generally not off and look good at the outset.


    The belief that 8-bit is better is confirmation bias tricking your mind.

    The various color profiles (ignoring V-Log L for a moment) by nature don't change the amount of color that ends up in the footage; they change how the color captured by the sensor is transformed. A given color is made brighter or darker, shifted towards red or green, etc.


    Your eyes can't even discern the difference between two similar shades of 8-bit color, let alone 10-bit. You can verify this yourself in any 8-bit editor, like Paint or Photoshop. Draw a big box on the screen and fill it with one color, say pure red (255,0,0). Mask off half the box so it stays pure red, then make the other, un-masked half (254,0,0). You will not be able to tell the difference between the two shades. Keep dropping down from 254 to 253, 252, etc. until you can clearly see the line where the color changes.

    If you can just tell the difference between 245 and 255, then you need a 10-shade spread to see a color difference. Now realize that in 10-bit that becomes a 40-shade spread. Your ability to discern the difference gets worse with age, and it can also be skewed for a given color if you have any kind of color blindness.
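    The 10-vs-40 figure follows directly from the scaling between bit depths. A one-liner makes it concrete (assuming simple 4x scaling, i.e. the same ramp quantized with two extra bits):

```python
def to_10bit(v8):
    """Scale an 8-bit code value (0-255) to its 10-bit equivalent."""
    return v8 * 4  # 2 extra bits = 4x as many levels

# A spread you can just barely see in 8-bit...
spread_8 = 255 - 245                         # 10 levels
# ...covers four times as many levels in 10-bit.
spread_10 = to_10bit(255) - to_10bit(245)    # 40 levels
print(spread_8, spread_10)
```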

    Dynamic range is a similar affair: the GH5 sensor is capable of a little over 12 stops at base ISO. Your eyes are only capable of about 10 stops at a glance, and again that can get worse with age and various medical conditions.


    8 hours ago, interceptor121 said:

    Using the resources as your disposal as good as you can and knowing how things work is a good thing not a bad thing so am a  bit surprised that people keep going on beating up my quantitative analysis and comparing it with subjective statements

    You're getting beat up because your analysis reads like it comes from someone with a serious lack of understanding of bit depth, chroma sub-sampling, and their benefits (or lack thereof) when it comes to video production. It's a similar thing when it comes to codecs.

    Right out of any camera, 10-bit isn't inherently better than 8-bit, and 4:2:2 isn't better than 4:2:0 either. Higher bit depths and chroma sub-sampling are only really beneficial if you are going to push the footage around in post: they can be pushed farther before the footage falls apart. This matters for major motion pictures and the like, because what comes out of the camera is usually drastically different from what ends up on the screen.
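    A rough way to see the "push it in post" benefit in numbers. This is a toy model, not a real grade: an under-exposed scene occupying the bottom quarter of the range, pushed 4x brighter, counting how many distinct output shades survive:

```python
# Simulate a dark scene occupying only the bottom quarter of the exposure range.
scene = [i / 1000 * 0.25 for i in range(1001)]   # luminance 0.0 .. 0.25

# Capture it at 8-bit and at 10-bit.
codes_8  = {round(v * 255)  for v in scene}      # ~65 distinct codes
codes_10 = {round(v * 1023) for v in scene}      # ~257 distinct codes

# Push exposure 4x in post, then map both back to an 8-bit display.
pushed_8  = {min(255, c * 4) for c in codes_8}
pushed_10 = {min(255, round(c * 4 * 255 / 1023)) for c in codes_10}

# The 8-bit source bands badly; the 10-bit source fills the tonal range.
print(len(pushed_8), len(pushed_10))
```

    Straight out of camera both look the same; it's only after the big push that the 8-bit capture runs out of distinct shades, which is exactly the banding people complain about.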

    Codecs are the same. No one codec is inherently better than another by default. ProRes or DNxHR isn't better than H.264, or H.265, or any of the other million codecs out there. They each have their pros and cons, and which is best is very situation-specific.


    2 hours ago, interceptor121 said:

    The sensor is 12 bit and raw does not have a colour space the camera saves files with sensor data and metadata the processing is then done by a program that works in an intermediate color space for editing and correction and then outputs in RGB format on JPEG

    The fact that the camera can work in a colour space as wide as adobe rgb or rec.2020 does not mean it can resolve 10 bit colours

    a pixel may have 8 bit resolution with colours coming from a wider colour space but still not able to resolve 12 bits on a single image

    even a DSLR with a 14 bit sensor stops at 26 bits which is 8 and 1/2 bits and in most cases there is no additional info between 12 and 14 bits RAW in terms of colour or resolution 


    It sounds like you are confusing dynamic range with bit depth.

  6. 1 hour ago, fuzzynormal said:


    Serious question: are people complaining about this tech actually trying to make anything "for reals" with it?

    In my opinion, a large number of creatives are malcontents by nature.

    Give them a GH5, and they'll complain because it's not as good as an Ursa. Give them an Ursa, and they'll complain because it's not as good as a Varicam. Give them a Varicam, and they'll complain because it's not as good as a Red. Give them a Red, and they'll complain because it's not as good as an Alexa 65.

    They are always held back by the perceived limitations of their gear.


    3 hours ago, interceptor121 said:

    The camera does not apply any codec to HDMI out it simply passes the buffer before encoding for the recorder to acquire code and save

    The test you mention is not appropriate as the size of the file depends on many other things not just bitrate H264 as a bunch of flags that squeeze the file but when encoding real time most of those are not active as otherwise the processing cannot keep up. Furthermore the codec is lossy so compressing it over and over again makes it smaller

    1. Whether a device can keep up depends on which preset you use. Even a camera could keep up with the lower-end presets, as they are not that intensive, but a camera for sure isn't going to be able to handle veryslow or placebo.
    2. Who said compress it over and over again? I said take clip A and transcode it to clip B using CRF mode. Then take clip A again, and run NR on it before transcoding it to clip C. Then compare clips B and C. C will be smaller, because less noise means the codec can generate the same image quality at a lower bit-rate. In other words, noise has a direct effect on how well a codec like H.264 works.

    3 hours ago, interceptor121 said:

    Noise reduction sharpening and all that comes in the picture profiles and the rest are applied before encoding which generally only makes things worst as it can transform sharpening and noise artefacts into other errors when compressing



    Picture profiles affect HDMI output; if they didn't, V-Log over HDMI wouldn't work. The question is how much of the picture profile is applied to the HDMI output.

    As I said before, converting the internal data to be HDMI-compliant has an effect on image quality. Wikipedia has a good overview of the actual spec. Keep in mind that when you are doing transforms like this, rounding issues can cause problems, as your camera isn't running at 64- or even 32-bit precision.



    To ensure baseline compatibility between different HDMI sources and displays (as well as backward compatibility with the electrically compatible DVI standard) all HDMI devices must implement the sRGB color space at 8 bits per component.[6](§6.2.3) Ability to use the Y′CBCR color space and higher color depths ("deep color") is optional. HDMI permits sRGB 4:4:4 chroma subsampling (8–16 bits per component), xvYCC 4:4:4 chroma subsampling (8–16 bits per component), Y′CBCR 4:4:4 chroma subsampling (8–16 bits per component), or Y′CBCR 4:2:2 chroma subsampling (8–12 bits per component). The color spaces that can be used by HDMI are ITU-R BT.601, ITU-R BT.709-5 and IEC 61966-2-4.[6](§§6.5,6.7.2)

    The GH5 is most likely working in YUV/YCbCr internally, as that's what its internally recorded files are. HDMI out is most likely 10-bit sRGB to maintain maximum compatibility. ProRes supports sRGB and Y'CbCr, but I don't know what color space recorders like the Atomos line are using.

    Thus you have at least one color space transform, and if you are unlucky maybe two. You will have rounding errors, and thus image quality degradation.
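    The rounding loss is easy to demonstrate: do a full-range RGB → Y'CbCr → RGB round trip at 8 bits and count the pixels that don't come back exactly. The coefficients below are the standard BT.709 luma weights; the code is just an illustration, not what any camera actually runs:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.709 RGB -> Y'CbCr, quantized to 8-bit integers."""
    y  = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556 + 128
    cr = (r - y) / 1.5748 + 128
    return round(y), round(cb), round(cr)

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform, again quantized to 8-bit integers."""
    r = y + 1.5748 * (cr - 128)
    b = y + 1.8556 * (cb - 128)
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

# Round-trip a coarse sweep of the RGB cube and count the casualties.
mismatches = 0
for r in range(0, 256, 17):
    for g in range(0, 256, 17):
        for b in range(0, 256, 17):
            if ycbcr_to_rgb(*rgb_to_ycbcr(r, g, b)) != (r, g, b):
                mismatches += 1
print(mismatches, "of", 16 ** 3, "samples changed")
```

    Every extra transform in the chain repeats this quantization, so the errors accumulate.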


    Saying internal or external recording is better than the other based on a handful of non-scientific tests is idiotic. Even if you had full transparency (intellectual-property-level knowledge) into what Panasonic and Atomos were doing, something as simple as cable interference could heavily skew the results.

  8. 1 hour ago, interceptor121 said:

    No there isn’t really

    you have optical image translated into digital signal and then compressed

    at equal optical quality the compression determines the perceived quality 


    Run this simple test with any noisy footage from any camera:

    1. Run the footage through ffmpeg and encode it to H.264 or H.265 in CRF mode.
    2. Run a de-noiser on the footage and then encode it with the exact same settings as you used above.

    You will find the de-noised clip is smaller.
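    The underlying effect isn't specific to video codecs: noise is incompressible by definition, so any compressor spends extra bits on it. A minimal Python sketch, with zlib standing in for the encoder's entropy coding and a smooth ramp standing in for denoised footage:

```python
import os
import zlib

# "Denoised" data: a smooth, highly predictable ramp repeated many times.
smooth = bytes(range(256)) * 100

# "Noisy" data: the same amount of data, but random.
noisy = os.urandom(len(smooth))

print(len(zlib.compress(smooth)), "bytes for the smooth data")
print(len(zlib.compress(noisy)), "bytes for the noisy data")
```

    In CRF mode the encoder holds quality roughly constant and lets the bit-rate float, which is why the noisy clip comes out bigger.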

    This is important, imo, because I don't believe the GH series applies the same processing to HDMI out as it does to its internal recording. Also keep in mind NR isn't the only thing the camera is doing internally. The video linked above even shows signs of this.


    Also note that even if the camera does the exact same processing to the internal and HDMI streams, the HDMI stream still has to go through a transform so it is compliant with the HDMI protocol. Then the recorder takes the stream, translates it, and then encodes it. So the HDMI output goes through a lot of extra processing, and we all know what that can lead to.

  9. It's better than 99% of the "technical" and "mathematical" reviews I've seen by people who are using pseudoscience. It seems like about 1 in 1000 understand there is more than bit rate, bit depth, chroma sampling, and codec involved in getting a good image.

    It's hilarious how many individuals think they know more than a huge company.

    1 hour ago, MurtlandPhoto said:

    Not a technical comparison, but these guys saved me the time to do my own testing months ago. Haven't looked back since.




  10. 8 hours ago, Shirozina said:

     RAID 0 currently enables fast write and read speeds and large storage capacity for less money than SSD or NVME storage. When the latter 2 drop in price significantly we can consign RAID to history......

    Everything you said up to this point was right on target.

    The primary use case of RAID is not speed or capacity, it's redundancy; that's what the R in RAID stands for. Speed and capacity are nice benefits, though. RAID is also becoming more prevalent because, as we need to store ever-increasing amounts of data, the risk of losing it to errors increases. Hence new RAID schemes that increase redundancy are always being worked on.
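    The redundancy part is worth making concrete. RAID 5, for instance, stores one XOR parity block per stripe, so any single failed drive can be rebuilt from the survivors. A toy byte-level sketch (not a real RAID implementation; the drive names are made up):

```python
def parity(*blocks):
    """XOR three equal-length data blocks together to form the parity block."""
    return bytes(a ^ b ^ c for a, b, c in zip(*blocks))

# Three data "drives" in one stripe, plus the parity drive.
d0, d1, d2 = b"clip-A--", b"clip-B--", b"clip-C--"
p = parity(d0, d1, d2)

# Drive 1 dies. XOR the survivors with parity to rebuild it.
rebuilt = bytes(a ^ b ^ c for a, b, c in zip(d0, d2, p))
print(rebuilt)
```

    XOR is its own inverse, so the surviving blocks plus parity always recover the missing one; that's the whole trick.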


  11. 1 hour ago, SR said:

    This sounds like it could be a big deal for RAID, if this new mass produced 4tb ssd is affordable. Plus, it comes with a 3-year warranty.



    SATA-based SSDs are on the way out in consumer devices. M.2 NVMe SSDs are smaller and multiple times faster because they are not limited by the (very slow by 2018 standards) SATA protocol.

    Depending on how computer-literate you are, I would advise against a consumer-grade NAS. If you have the skills, it's far more economical and flexible in the long run to build your own NAS running FreeNAS.


  12. 36 minutes ago, Andrew Reid said:

    Sometimes I feel we go over the same arguments again and again for the benefit of people new to the thread or the story and who haven't done their research.

    See, this is very similar to what I mean.

    The statement reads very much like a thinly veiled insult directed at me because you disagree with my point of view. Perceived or real, that has consequences.

  13. 34 minutes ago, Andrew Reid said:

    That's the thing... A joke is not a statement. It's not even real.


    A joke isn't always a "joke"; sometimes it's a thinly veiled insult, provocation, defamation, or plain old personal attack. Not to mention jokes can be in bad taste.

    Kathy Griffin is a perfect example. She posted a horrendous photo and caption. When the backlash hit, she retracted it and then publicly apologized, saying "I beg for your forgiveness". Later she publicly retracted her apology, I assume because it didn't yield the result she wanted.

    Society has had social lines in the sand since the beginning of time. Some have moved over time; some are age-, race-, or even ethnicity/nationality-dependent. The one constant that holds true for all of them is that flirting with them can have consequences.

  14. 33 minutes ago, jonpais said:

    And in recent news, Olympus is expected to launch a high end video camera in 2019. But I’m afraid my m43 days are coming to an end...

    I think that would be a mistake; the G7 is super light and still excellent for video. I've shot so much video with it and the 14-42 f/3.5-5.6, 35-100 f/3.5-5.6, and 25 f/1.7.

    The hardest thing about photography and videography is avoiding the GAS pitfall!


  15. 13 hours ago, fuzzynormal said:

    What they want or are least likely to complain about?  


    9 hours ago, kye said:

    I may be nit-picking, but there might be a difference between what the customers want at home and what will make it sell on the shop floor.

    I'm going to stick with what they want!


    Imo, your average consumer is clueless when it comes to the details of TV technology. All they know is that HDR is the "must have" right now, and to a lot of them that just means the brighter the better. When I purchased my 4K Samsung on Black Friday 2016, I had to spend about two hours waiting on customer service in my local Best Buy. From all the conversations I heard, brightness, size, and "smartness" were the primary factors in which TV someone purchased.


  16. 4 hours ago, jonpais said:

    It took about 10 minutes to go through all the settings and turn off all the crap enhancements on my TV; about one quarter of the time it took to program my new camera

    The same here. 


    I'm not sure why some people get so bent out of shape about this. The default settings are what they are because market research shows that's what your average consumer wants. If you don't like them, then change them.

  17. 9 hours ago, IronFilm said:

    Fair enough. Although by this in point it is going to be getting VERY bulky for the whole set up! 
    You'll be making my Sony PMW-F3 look like a slim and compact set up. 

    This would only be used as a studio rig mounted on a tripod, so not much of an issue. In other circumstances, various components would be stripped off.
