
Is 4K Not as Important as We Think It Is? 3 World-Class DPs Weigh In


enny

This dream team of cinematographers is far more interested in dynamic range and color science than it is in resolution, especially when it comes to shooting dramatic narrative content. 4K and higher resolutions don't necessarily help audiences suspend their disbelief, which is (or should be) one of the primary goals for narrative filmmakers. Seeing every pore on an actor's skin can be more of a distraction for an audience than anything else.

 

 


Guest Ebrahim Saadawi

Repost, but yes, it's a great watch. It definitely puts things in perspective. Resolution is only one aspect of image quality, sometimes needed and sometimes not.


This dream team of cinematographers is far more interested in dynamic range and color science than it is in resolution,...


Sorry to harp on this point, but resolution is fundamental to color science. Resolution influences color depth just as much as bit depth influences color depth.

Yes, bit depth is not the same thing as color depth -- they are decidedly different properties. Essentially, bit depth is the potential of color information in one digital pixel, whereas color depth (in digital imaging) is the total potential of color information from a given group of pixels. That "given group of pixels" involves the resolution side of the equation.

By the way, color depth also applies to analog imaging -- bit depth only applies to digital imaging.

Resolution is so crucial to color depth that a 1-bit image can exhibit an amazing range of photographic tones and colors, given enough resolution. In fact, every day we see what are essentially 1-bit images, when we look at printed magazines, newspapers and many posters and billboards.
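
To make that concrete, here is a minimal numpy sketch (my own toy construction, not anything from the video): dither a smooth gradient down to pure black-and-white, then average blocks of the 1-bit pixels, and the tones re-emerge.

```python
import numpy as np

# Toy demo: trading resolution for tonal depth, halftone-style.
h, w = 256, 4096
gradient = np.tile(np.linspace(0.0, 1.0, w), (h, 1))  # smooth 0..1 ramp

# 4x4 ordered (Bayer) dither thresholds, tiled across the image.
bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0
threshold = np.tile(bayer4, (h // 4, w // 4))

one_bit = (gradient > threshold).astype(float)  # every pixel is 0.0 or 1.0

# Average 16x16 blocks of 1-bit pixels: each block mean is a recovered tone.
tones = one_bit.reshape(h // 16, 16, w // 16, 16).mean(axis=(1, 3))
print(tones.min(), tones.max())  # a near-continuous ramp comes back
```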

Misconceptions abound regarding the basics of color depth and how it relates to resolution, so much so that it is doubtful that any one of these "cinematography dream team" experts is aware of these color fundamentals.
 
 

... especially when it comes to shooting dramatic narrative content. 4K and higher resolutions don't necessarily help audiences suspend their disbelief, which is (or should be) one of the primary goals for narrative filmmakers.


Agreed, and, by the same token, "Gilligan's Island" would be just as funny in 4K as it is in SD.

In addition, I certainly would rather shoot HD with a Sony F35 (or with a BMPCC and a speedbooster) than 5K with a Red Epic. 10-bit HD is enough color depth for most of my work, and the F35 (and the BMPCC) footage looks better to me than that from any of the Red cameras.

tupp-

 

saying "there are a lot of misconceptions" is probably an understatement, especially with what we're referring to when we say "dynamic range".. because that can describe the scene, the sensor, the codec, and the bit depth of the end file format (which determines usable dynamic range).. but with respect to your 1-bit example:

 

if you had a camera that could shoot infinite resolution, but could only record a white or black pixel based on relative scene luminance -- say, setting middle at an 18% gray reflector, so anything above or below that would quantize to white or black -- then regardless of resolution, you would never have the details that you see in the newspaper example. those images started out as photographs and then went through a line screen (essentially a dithering process), or as an artist's engraving. in other words, they started with the additional dynamic range data in order to know which pixels to throw away, or in the mind's eye of an artist who was imagining that information. but if you are capturing a wider dynamic range with the sensor and then encoding into 1-bit, dithering etc., then that would be more analogous to the newspaper example, although information is still lost that cannot be recreated by downsampling.
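
(a quick numpy sketch of the distinction being drawn here -- my own toy numbers, not the poster's: a single fixed threshold at "18% gray" collapses every tone, and no amount of downsampling the result brings the gradient back.)

```python
import numpy as np

# Toy contrast to the halftone/dither case: one global threshold, no dither.
h, w = 256, 4096
gradient = np.tile(np.linspace(0.0, 1.0, w), (h, 1))

hard = (gradient > 0.18).astype(float)  # every tone collapses to 0.0 or 1.0

# Downsampling cannot recover the ramp: block means stay at ~0 or ~1.
tones = hard.reshape(h // 16, 16, w // 16, 16).mean(axis=(1, 3))
print(np.unique(tones.round(2)))  # essentially {0.0, 1.0} plus one edge block
```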

 

i'm also sorry to harp on this point.. 


saying "there are a lot of misconceptions" is probably an understatement, especially with what we're referring to when we say "dynamic range".. because that can describe the scene, the sensor, the codec, and the bit depth of the end file format (which determines usable dynamic range).. but with respect to your 1-bit example:


Bit depth and dynamic range are two completely independent and unrelated properties.

One can map a given bit depth's range of code values to any section of the amplitude range. Dynamic range refers to the "usable" section of the amplitude range.

 

if you had a 1-bit camera (or maybe better to say a sensor with a dynamic range of 1) that could shoot infinite resolution, it would still need to look at the overall relative scene luminance to determine what it would encode as white or black. in a perfect-world scenario, using 18% gray as middle, anything above an 18% gray reflector would be white and anything below would be black. there would be no additional tones above or below that, regardless of resolution, so you would never have the details that you see in the newspaper


That's not how a 1-bit camera would be configured. With such a low bit-depth camera, one would have to create pixel groups with a range of sensitivities among the pixels in each group.

There are four basic ways to vary the sensitivities of adjacent pixels:
1. electronically;
2. with optical filtration (similar to that of the RGB filters on pixel groups, but with ND instead);
3. with different-sized apertures (or obstructions) in front of each pixel in the group;
4. with various-sized pixels.

I think that some early digital imaging experimenters tried an electronic cascading technique with CCDs to get a greater range of tones, and Magic Lantern's "Dual ISO" feature can change pixel sensitivity on a per-line basis. Also, I am fairly sure that Panavision was using either varied pixel NDs or varied pixel apertures with their original Dynamax HDR sensor -- a true example of getting greater tonal range by increasing resolution with a fixed bit depth. In addition, Fuji is currently using various pixel sizes on their X-Trans sensor (for moire/aliasing elimination -- not for increased tonal range).
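
Here is a hypothetical sketch of that pixel-group idea (the ND factors and threshold are my own assumptions, purely for illustration): four 1-bit photosites behind different ND strengths act like a crude five-level tone sensor.

```python
# Hypothetical varied-sensitivity pixel group: four 1-bit photosites
# behind different ND filters together report one of five tonal levels.
ATTENUATION = (1.0, 0.5, 0.25, 0.125)   # assumed per-site ND transmission
THRESHOLD = 0.18                        # assumed common 1-bit trip point

def group_reading(scene_luminance: float) -> int:
    """Count how many 1-bit sites trip; 0..4 gives five tonal levels."""
    return sum(scene_luminance * nd > THRESHOLD for nd in ATTENUATION)

for lum in (0.1, 0.2, 0.4, 0.8, 1.6):
    print(lum, group_reading(lum))      # readings step up with luminance
```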

 

example. those images started out as photographs and then went through a line screen (essentially a dithering process), or as an artist's engraving. in other words, they started with the additional dynamic range data in order to know which pixels to throw away, or in the mind's eye of an artist who was imagining that information.


I am not sure that line-screen printing is considered to be an actual dithering process.

 

i'm also sorry to harp on this point..


Huh? How are you "harping?"

Bit depth and dynamic range are two completely independent and unrelated properties.

 

You and I have kinda discussed this in the past.. I think, because we are coming at this from different backgrounds, you believe that I'm saying that dynamic range is the same as bit depth.

 

more below on this --

 

That's not how a 1-bit camera would be configured. With such a low bit-depth camera, one would have to create pixel groups with a range of sensitivities among the pixels in each group, and then sum the results of each group.

 

so the 1-bit camera example was theoretical, not in fact how you would implement a 1-bit camera in the real world for best results etc.. in which case you would be leaving out too much info really to cover. but even summing the results of each pixel group with infinite resolution would not account for the loss of information.

 

I am not sure that line-screen printing is considered to be an actual dithering process.

 

in order to get that (you called it 1-bit) black and white image you see in the newspaper, you often need to do several things. first, start with a much higher resolution and generally 8-bit image.. if it's a newspaper like the LA Times for example, you need to convert to grayscale, increase contrast, possibly add unsharp mask and print to a 100 line per inch screen Velox (that's probably really dated info). anyway, what we see is the direct result of a series of changes to a much more detailed image, which we can't then recreate from the printed image, no matter what the resolution. this is often misunderstood because of the "resolution influences color depth as much as bit depth..." type of statement.

 

"usable dynamic range and bit depth"

 

you have argued that bit depth and dynamic range are not related, but i noticed in one of your other posts you asked why the banding in a blue sky was so bad with the A7s and wanted to know if there was a solution. an 8-bit file is considered low dynamic range for this (and other) reasons. you are trying to encode too much contrast variation into too few code values, all noise and dither strategies aside. if you only had a narrow range of light intensity or gamma, particularly in the shadows, no compression etc, it would be less noticeable, as it would with higher bit depth encoding. image formats below 16-bit are often referred to as low dynamic range in a render pipeline.. even 10-bit technically is considered LDR. for this, the bit depth and format are both important.. i.e. integer, float or 1/2 float (EXR).
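
(as a rough illustration of "too few code values" -- my arithmetic, assuming a plain 1/2.2 power-law encode rather than any real camera curve:)

```python
# How many integer code values land in the brightest stop of the signal?
def codes_in_top_stop(bits: int, gamma: float = 1 / 2.2) -> int:
    peak = 2 ** bits - 1
    half_scale = round((0.5 ** gamma) * peak)  # one stop below full scale
    return peak - half_scale

print(codes_in_top_stop(8))   # ~69 codes cover the entire top stop
print(codes_in_top_stop(10))  # ~277 codes for the same stop
```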

 

from Wikipedia on OpenEXR:

 

It is notable for supporting 16-bit-per-channel floating point values (half precision), with a sign bit, five bits of exponent, and a ten-bit significand. This allows a dynamic range of over thirty stops of exposure.

 

another context for the term.. and obviously bit depth is important with respect to the dynamic range potential. 
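
(the quoted figure is easy to sanity-check -- my arithmetic: half floats run from a smallest normal value of 2^-14 up to a maximum of 65504.)

```python
import math

# Ratio of largest to smallest normal half-float value, in stops.
stops = math.log2(65504 / 2 ** -14)
print(stops)  # ~30.0; counting subnormals (down to 2**-24) gives ~40 stops
```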


so the 1-bit camera example was theoretical, not in fact how you would implement a 1-bit camera in the real world for best results etc..


Okay.

 

in which case you would be leaving out too much info really to cover.


What is the meaning of this statement? Given enough resolution and barring noise, a 1-bit camera could certainly provide the same amount of color info as an Alexa 65.

 

but even summing the results of each pixel group with infinite resolution would not account for the loss of information.


Forget the summing -- summing implies that we are using more than 1 bit in our system (after the summing), and summing is completely unnecessary. I edited out the summing step just before your response.

In regards to the "loss of info" -- to what are you referring? Given infinite resolution and barring any other practical problems (such as noise), our 1-bit camera could certainly have more color info than an Alexa 65.

 

in order to get that (you called it 1-bit) black and white image you see in the newspaper, you often need to do several things. first, start with a much higher resolution


The subject/scene that you are shooting with the 1-bit camera has resolution down to the atomic level.

 

and generally 8-bit image..


One could start with an analog image that has 0-bit depth.

Regardless of the original bit depth and resolution (or color depth), the image is rendered within the color depth limits of the printing system.

 

if it's a newspaper like the LA Times for example, you need to convert to grayscale, increase contrast, possibly add unsharp mask and print to a 100 line per inch screen Velox (that's probably really dated info).


Thanks for reminding me of Velox.

 

anyway, what we see is the direct result of a series of changes to a much more detailed image, which we can't then recreate from the printed image, no matter what the resolution.


Not necessarily -- not if the printing system is higher quality than that of the original image.

If the original image was shot on Royal-X and the printing screen is 10x finer than the Royal-X grains, there could be no essential loss in quality -- even though the final image is not exactly the same as the original.

The same principle applies to transcoding video.

 

this is often misunderstood because of the "resolution influences color depth as much as bit depth..." type of statement.


I fail to see how the fundamental relationship between resolution, bit depth and color depth causes misunderstanding, since most are completely ignorant of that actual relationship.

 

"usable dynamic range and bit depth"

you have argued that bit depth and dynamic range are not related,


Dynamic range and bit depth are two completely different and independent properties.

By the way, dynamic range also exists in analog systems, but bit depth exists ONLY in digital systems.

but i noticed in one of your other posts you asked why the banding in a blue sky was so bad with the A7s and wanted to know if there was a solution.


No. I started a thread for members to post on how they dealt with the banding in the A7s. I already knew why banding is prevalent in the A7s -- it's 8-bit.

 

an 8-bit file is considered low dynamic range for this (and other) reasons.


Not by anyone who knows the difference between bit depth and dynamic range.

Bit depth and dynamic range are two completely independent properties.

 

you are trying to encode too much contrast variation into too few code values,


Not necessarily. Banding is the phenomenon of a value threshold transition that falls within a broad, smooth tonal gradation. It is more a result of bit depth and of the smoothness of tonal transitions in the subject.

Other 8-bit cameras exhibit the same (or more) banding as the A7s, and most of those other cameras have less dynamic range.

For instance, the A7s has a large capture dynamic range with value thresholds that are spaced far apart in amplitude. So, the A7s might have one value threshold that falls within a sky that ranges, say, 1/16th of a stop, while an 8-bit camera with less dynamic range might have 3 value thresholds that fall within the same sky. In such a scenario, the A7s has two bands in the sky, while the other 8-bit camera has four bands in the sky (and probably more contrast!).
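
Here is a toy version of that band-counting scenario (my own construction -- a uniform quantizer with made-up threshold spacings, not real camera data):

```python
import numpy as np

# Quantize the same gentle sky ramp with coarse vs fine threshold spacing.
sky = np.linspace(0.703, 0.727, 2000)  # a smooth, low-contrast gradient

def count_bands(signal, spacing):
    return len(np.unique(np.floor(signal / spacing)))

print(count_bands(sky, 0.02))    # widely spaced thresholds: 2 bands
print(count_bands(sky, 0.008))   # tighter spacing: 4 bands in the same sky
```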

 

if you only had a narrow range of light intensity or gamma, particularly in the shadows, no compression etc, it would be less noticeable, as it would with higher bit depth encoding.


The smoother the gradation of intensity, the greater the chance of banding.

Yes, higher bit depth can make banding less noticeable.

 

image formats below 16-bit are often referred to as low dynamic range in a render pipeline..


Not by those who know the difference between bit depth and dynamic range.

Bit depth and dynamic range are two completely independent properties.

 

even 10-bit technically is considered LDR. for this, the bit depth and format are both important.. i.e. integer, float or 1/2 float (EXR).


Bit depth and dynamic range are two completely independent properties.

 

from Wikipedia on OpenEXR:
It is notable for supporting 16-bit-per-channel floating point values (half precision), with a sign bit, five bits of exponent, and a ten-bit significand. This allows a dynamic range of over thirty stops of exposure.


What can one say, except don't believe everything that you read on Wikipedia.

The fact is that a 4-bit digital system can have over thirty stops of dynamic range, while a 16-bit system can have only one "stop" of dynamic range.

Furthermore, analog systems (which have zero bit depth) can have a dynamic range of over thirty "stops."

Put simply, bit depth is the potential number of value increments within a digital system. Dynamic range is the ratio between the largest and smallest values of usable amplitude, in both analog and digital systems.
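
To put numbers on that claim (a sketch of my own, not any real camera's encoding), the same count of code values can be mapped across wildly different amplitude ratios:

```python
import math

def stops_covered(max_amp: float, min_amp: float) -> float:
    """Dynamic range in stops: log2 of largest/smallest usable amplitude."""
    return math.log2(max_amp / min_amp)

# A 4-bit system whose 16 codes sit 2 stops apart spans 30 stops...
four_bit_amps = [2.0 ** (2 * n) for n in range(16)]
print(stops_covered(four_bit_amps[-1], four_bit_amps[0]))  # 30.0

# ...while a 16-bit system whose 65536 codes all sit inside a 2:1
# amplitude window spans exactly one stop.
sixteen_bit_amps = [1.0 + n / 65535 for n in range(65536)]
print(stops_covered(sixteen_bit_amps[-1], sixteen_bit_amps[0]))  # 1.0
```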

tupp-

 

wow, responding to almost every sentence, even the ones you don't have a problem with! well, since i hope to eat dinner sometime tonight, i'll just address some of the highlights:

 

i said,

 

"so the 1bit camera example was theoretical, not in fact how you would implement a 1bit camera in the real world for best results etc.. in which case you would be leaving out too much info really to cover."

 

and you respond,

 

"What is the meaning of this statement? Given enough resolution and barring noise, a 1-bit camera could certainly provide the same amount of color info as an Alexa 65."

 

you misunderstood. i meant if you start to take a simple thought experiment and literally start describing algos and pixel blocking techniques, you introduce too many variables for the thought experiment to be useful. 

 

i said,

 

"in order to get that ... black and white image you see in the newspaper, you often need to do several things. first, start with a much higher resolution and generally 8bit image."

 

you said,

 

"One could start with an analog image that has 0-bit depth. Regardless of the original bit depth and resolution (or color depth), the image is rendered within the color depth limits of the printing system."

 

obviously the printer will limit the printed color depth. what i was saying is that you have to start with a high resolution (and color depth) image to create a low resolution monotone image with fine details. anyone who has made an indexed GIF or line art understands this principle.

 

i say, 

 

"anyway, what we see is the direct result of a series of changes to a much more detailed image, which we can't then recreate from the printed image, no matter what the resolution."

 

and you say,

 

"Not necessarily -- not if the printing system is higher quality than that of the original image... there could be is no essential loss in quality -- even though the final image is not exactly the same as the original.

 

then it's not the original image; it's something else.

 

i said, 

 

" this is often misunderstood because of the "resolution influences color depth as much as bit depth..." type of statement. "

 

you said,

 

"I fail to see how the fundamental relationship between resolution, bit depth and color depth causes misunderstanding, since most are completely ignorant of that actual relationship."

 

it's an oversimplification.. i'll just refer you to the posts and discussions about generating "true 10-bit 2K 4:4:4 from 8-bit 4K 4:2:0" for some of the confusion.
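
(that debate is easy to poke at with a toy sketch -- mine, with a made-up noise level: averaging 2x2 blocks of 8-bit samples only yields intermediate values where noise or dither decorrelates the four samples.)

```python
import numpy as np

# Does downsampling "4K" 8-bit to "2K" create finer-than-8-bit values?
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.2, 0.21, 512), (512, 1))  # subtle gradient

def capture_8bit(img, dither):
    noisy = img + rng.normal(0.0, 0.002, img.shape) if dither else img
    return np.clip(np.round(noisy * 255), 0, 255)

for dither in (False, True):
    codes = capture_8bit(scene, dither)
    down = codes.reshape(256, 2, 256, 2).mean(axis=(1, 3))  # 2x2 block mean
    print(dither, len(np.unique(down)))  # far more distinct levels with dither
```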

 

i say, 

 

"an 8bit file is considered low dynamic range for this (and other) reasons."

 

you respond, 

 

"Not by those who know the difference between bit depth and dynamic range."

 

in this context, dynamic range refers to the ability to record light info without visible steps. we understand that it's not referring to an empirical definition of dynamic range, but rather to a low or high usable potential.

 

a couple more -- 

 

you are very adamant that dynamic range and bit depth are "two different properties", even though i keep stating that i'm not saying they are the same, but nevertheless:

 

you said,

 

"Dynamic range and bit depth are two completely different and independent properties. By the way, dynamic range also exists in analog systems, but bit depth exists ONLY in digital systems "

 

so this is understood by anybody with a little EE and CS knowledge.. but more importantly, i keep stating that what i'm referring to is "usable dynamic range". there is no point in encoding a gazillion stops of dynamic range into a 2-bit file format. you can't use it. extensive tests have been done on the human visual system and its requirements for minimum unnoticeable image contrast steps; refer to JND, or just-noticeable difference, with respect to contrast perception.
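
(a back-of-envelope version of that -- my numbers, assuming a ~1% just-noticeable contrast step:)

```python
import math

# Code values needed to span 30 stops with steps no larger than one JND.
jnd_ratio = 1.01                                  # assumed Weber fraction
steps = math.log(2 ** 30) / math.log(jnd_ratio)   # JND-sized steps in 30 stops
print(round(steps))                   # ~2090 steps
print(math.ceil(math.log2(steps)))    # 12 -> you'd want ~12 bits, log-spaced
```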

 

so again, this is not saying dynamic range = bit depth; this is saying that the usable dynamic range encoded into a digital file can be limited by too few bits per pixel. and out of that reality, terms like HDR and LDR workflows have emerged.

 

and lastly, you say, 

 

"don't believe everything that you read on Wikipedia. The fact is that a 4-bit digital system can have over thirty stops of dynamic range, while a 16-bit system can have only one "stop" of dynamic range."

 

refer to the above point about encoding over 30 stops into 16 colors. anyway, you aren't just saying don't believe Wikipedia about the term "dynamic range" with respect to image formats. you're also saying don't believe the Academy (see ACES), ILM (see EXR), Imageworks (see OpenColorIO), or UC Berkeley (Paul Debevec); that Radiance's .hdr file extension should be renamed; that the Wikipedia page titled "High Dynamic Range File Formats" should be taken down... and more. anyway, i gotta stretch now.. :( thanks for the distraction from actual work.


Going back to the OP and the title of the post - I think 4K will be important to you if it's going to bring you a benefit.

 

These guys are all working in specific genres and within that they work within specific constraints.  Stuff like maybe having big budgets, lots of crew, loads of experience, expensive and powerful talent etc etc.  The reasons they gave about not wanting or needing 4k don't necessarily apply to somebody else (e.g. me and the stuff I shoot).  

 

I can see that 4k might be very useful to me - especially things like being able to crop an HD frame out of a 4k frame and simulate pans and zooms etc.  Working with a very small crew (mostly just me) being able to introduce those elements into shots without needing to handle the camera during the shoot could be a huge positive.  For those guys they maybe don't see the benefit of that as they can just say 'get an extra camera crew in and shoot the pan for real'.  

 

Maybe 4k is initially going to be another creative tool that low budget shooters can get the most out of.


Well this should be an interesting topic, but it kind of got killed off by all the geeky stuff some of us don't bother to know  :D

 

I still shoot with my GH3, and I regularly shoot my stuff on Canon FD lenses (more organic, not too sharp). Some clients are still telling me to soften up the image because the resolution is too high, too clean, "too HD." A client told me just today that "the quality is too high" - and this was for a teenage pop video.

 

Those DPs are spot on in my opinion. I personally like 4K, but only as a post-production tool to crop, reframe, stabilise and move images. Most of the 4K on YouTube looks awful on my monitor (very brittle, moire galore). On a TV it reminds me of The Hobbit in 48fps - hyper-real. Hyper-real is, for the most part, not the best way to do things.

 

4k is an exciting format - but dynamic range and colour beat it to shreds.  :ph34r:


Well this should be an interesting topic, but it kind of got killed off by all the geeky stuff some of us don't bother to know  :D

 

4k is an exciting format - but dynamic range and colour beat it to shreds.  :ph34r:

 

Yes, 4K is an interesting topic in general, and so is the video, whether you agree with the guys in it or not. Unfortunately the nerds clashing their light sabres are making so much racket it's hard to hear the more interesting and the more practical arguments for and against the talking points provided in the video.

 

Speaking of which, didn't we already have at least one, if not two, previous threads about the same video, at least one with less noise & nerdytainment? Maybe the admins would want to lock this one and redirect it to the previous one.


Going back to the OP and the title of the post - I think 4K will be important to you if it's going to bring you a benefit.

 

These guys are all working in specific genres and within that they work within specific constraints.  Stuff like maybe having big budgets, lots of crew, loads of experience, expensive and powerful talent etc etc.  The reasons they gave about not wanting or needing 4k don't necessarily apply to somebody else (e.g. me and the stuff I shoot).  

 

 

 

I think that is the best explanation so far.  The other thing to take into consideration is the quality of the 1080p.  I'm sure the detail they can pull from their 1080p is a little bit more than the detail I can pull from my BMPCC... And obviously my T3i is a joke for anything other than close-ups of people's faces.  If I could shoot 1080p on an Arri Alexa, I wouldn't trade it for 4K from a GH4... and for that matter, neither would anyone else on this forum.

 

The problem is we are not getting 1080p worth of information out of a BMPCC.  And the BMPCC is arguably the most detailed image till you get to... 4K in the GH4!

 

The punching in and simulated dolly stuff is great with 4K.  But just getting true 1080p from downsampled 4K is a boon.

 

Those DPs were talking amongst themselves and to the high-end, $30+ million budget movie people.  But someone like me shooting vacation movies needs more detail.

 

 



Those DPs are spot on in my opinion. I personally like 4K, but only as a post-production tool to crop, reframe, stabilise and move images. Most of the 4K on YouTube looks awful on my monitor (very brittle, moire galore). On a TV it reminds me of The Hobbit in 48fps - hyper-real. Hyper-real is, for the most part, not the best way to do things.

 

I doubt most people who saw The Hobbit thought it looked "hyper-real."

 

Frankly, I think things like the sweeping New Zealand vistas in The Hobbit benefit from things like 4K and higher frame rates.  One of my biggest pet peeves is how a lot of movies open with those sweeping aerial shots, and the trees or ground in the foreground is just a mushy, stuttery blur because of the frame rate.  It doesn't bother most people, but to say there is no room for improvement is a bit much.  Or at least, to say that improving it degrades the movie is a little over the top.


Well this should be an interesting topic, but kind of got killed off by all the geeky stuff some of us don't bother to know


Sorry. Didn't mean to hijack the thread. Just trying to give a friendly reminder that resolution is integral to color depth.

On the other hand, there certainly is nothing "geeky" about a thread titled, "Is 4K Not as Important as We Think It Is?" :)

 

Those DPs are spot on in my opinion. I personally like 4K, but only as a post-production tool to crop, reframe, stabilise and move images.


I don't think that there is anything inherently wrong with 4K (or with any other given resolution).

Of course, the higher the resolution, the greater the bandwidth/resource requirements (all other variables remaining the same).

In addition, those DPs stressed lighting over technical specs (including dynamic range and "color science").

 

Most of the 4K on YouTube looks awful on my monitor (very brittle, moire galore).


YouTube looks awful on a lot of monitors.

However, the moire problem could be peculiar to your set-up. What is the resolution of your monitor?

 

On a TV it reminds me of The Hobbit in 48fps - hyper-real. Hyper-real is, for the most part, not the best way to do things.


That's an interesting topic.

Just the other day, I was talking to my Indian filmmaker friend, and he explained that the reason why a lot of Indian films were completely looped (other than practical shooting advantages) was that Indian audiences like the "other-worldly" feel of overdubbed dialog.

 

4k is an exciting format - but dynamic range and colour beat it to shreds.


I don't think that 4K is particularly "exciting." 4K cinema cameras have been around since the first Dalsa Origin (2002?). 4K is yet another step in the cinematography megapixel race -- a conspicuous technical spec for producers (and others) to demand.

In regards to the notion that dynamic range and color are more important than resolution, again, resolution is integral to color depth.

However, a system with 10-bit depth and at least HD resolution would certainly suffice for a lot of the work done today. As I said earlier in the thread, I would rather shoot HD with a Sony F35 than 5K with a RED Epic.

As those DPs suggested, lighting usually trumps all of the technical specs.

The other thing to take into consideration is the quality of the 1080p. I'm sure the detail they can pull from their 1080p is a little bit more than the detail I can pull from my BMPCC...


If you are suggesting that converting from 4K to 1080 gives more detail than just capturing at 1080, I think the jury is still out on that issue. There are other variables involved, such as how precisely one can focus in 4K compared to in 1080, and how sharpening algorithms in 4K affect the results after conversion to 1080.

On the other hand, one can certainly retain more color depth when converting from 4K to 1080, all other variables remaining the same.

 

And obviously my T3i is a joke for anything other than close-ups of people's faces.


Not at all.

Aside from this thread's point that lighting and content are probably more important than resolution, the T3i/600D is one of the few cameras that can take advantage of Tragic Lantern's GOP 1 capability, which eliminates the blockiness from interframe H.264 compression.

To maximize TL's special H.264 controls:
1. disable the camera audio;
2. get a fast SDHC/XC card;
3. install TL;
4. boost your bit rate to at least 2x;
5. use a flat picture style with the sharpness set to 1 (sharpen later in post);
6. and set your GOP to 1.

You might be pleasantly surprised by the results.

 

If I could shoot 1080p on an Arri Alexa, I wouldn't trade it for 4K from a GH4... and for that matter, neither would anyone else on this forum.


For 1080, I wouldn't necessarily choose an Alexa over a Sony F35.

 

The problem is we are not getting 1080p worth of information out of a BMPCC.


Not sure what is meant by this statement.

The BMPCC gives an enormous amount of information in raw. I just shot a feature using two BMPCCs with speedboosters, and we captured about 8 terabytes of raw, 12-bit footage.

 

And the BMPCC is arguably the most detailed image till you get to... 4K in the GH4!


Generally, I would rather shoot 1080 with a Sony F35 than a BMPCC.

Also, there are other cameras that shoot resolutions between 1080 and 4K.

Furthermore, other than the GH4, there are cameras that shoot raw cinema 4K (and greater).

 

Those DPs were talking amongst themselves and to the high-end, $30+ million budget movie people.


They spoke of the prevalence of down-scaling -- how gaffers complained about the trend toward less powerful lighting packages, due to greater camera sensitivity. The gaffers can no longer skim as many of the big 10-ton truck and 1500-amp genny kit rentals.

Now, it is more common to use 5-ton LED/fluo-heavy packages with 500-amp (or less) gennies.


Generally, I would rather shoot 1080 with a Sony F35 than a BMPCC.

Also, there are other cameras that shoot resolutions between 1080 and 4K.

Furthermore, other than the GH4, there are cameras that shoot raw cinema 4K (and greater).


You need to go back and read my post. The whole point is that when you are on a budget of, say, $500 for a camera (BMPCC) or less than $2,000 for a camera (GH4), you have to scrimp and do some tricks and workarounds. I will repeat: if I could work with an Arri Alexa at 1080p, neither I nor anyone else would trade it for the 4K of a GH4. You have to use common sense and look at the tools available to people on a particular budget. These DPs working on $30+ million movies with multiple Alexas scattered around are not addressing the fact that you get more detail with a GH4 than with a BMPCC.
 

If you are suggesting that converting from 4K to 1080 gives more detail than just capturing at 1080, I think the jury is still out on that issue.


Watch some BMPCC vs GH4 review videos. My jury is my eyes and they are in with a verdict. Didn't even realize this was controversial.
 

There are other variables involved, such as how precisely one can focus in 4K compared to in 1080, and how sharpening algorithms in 4K affect the results after conversion to 1080.


I've shot medium format film since well before anyone even thought up 4K or a BMPCC. I can focus a lens. I'm also not one of those people who are enamored with shooting everything at f/1.4. As I stated in the answer you are quoting, the place where you really see 4K shine is in those sweeping landscape shots. The T3i just falls to pieces on that kind of material. And guess what kind of shot appears a lot in vacation videos? Guess what aperture you shoot those landscapes at. Guess how far back you are standing from those beautiful vistas. I mean, really, if you can't get a landscape in focus on an m4/3 sensor with a wide-angle lens and the aperture stopped down... I don't know what to say.

For the record, I own a BMPCC. It's the best video camera I own. But that doesn't mean I'm blind to its shortcomings.

