
KnightsFan

Reputation Activity

  1. Like
    KnightsFan got a reaction from IronFilm in Don't panic about AI - it's just a tool   
    Nice article! My perspective is as a software engineer, at a company that is making a huge effort to leverage AI faster and better than the industry. I am generally less optimistic than you that AI is "just a tool" and will not result in large swaths of the creative industry losing money.
    The first point I always make is that it's not about whether AI will replace all jobs, it's about the net gain or loss. As with any technology, AI tools both create and destroy jobs. The question for the economy is how many. Is there a net loss or a net gain? And of course we're not only concerned with number of jobs, but also how much money that job is worth. Across a given economy--for example, the US economy--will AI generated art cause clients/studios/customers to put more, or less net money into photography? My feeling is less. For example, my company ran an ad campaign using AI generated photos. It was done in collaboration with both AI specialists to write prompts, and artists to conceptualize and review. So while we still used a human artist, it would have taken many more people working many more hours to achieve the same thing. The net result was we spent less money towards creative on that particular campaign, meaning less money in the photography industry. It's difficult for me to imagine that AI will result in more money being spent on artistic fields like photography. I'm not talking about money that creatives spend on gear, which is a flow of money from creatives out, I'm talking about the inflow from non-creatives, towards creatives.
    The other point I'll make is that I don't think anyone should worry about GPT-4. It's very competent at writing code, but as a software engineer, I am confident that the current generation of AI tools cannot do my job. However, I am worried about what GPT-5, or GPT-10, or GPT-20 will do. I see a lot of articles--not necessarily Andrew's--that confidently say AI won't replace X because it's not good enough. It's like looking at a baby and saying, "that child can't even talk! It will never replace me as a news anchor." We must assume that AI will continue to improve exponentially at every task, for the foreseeable future. In this sense, "improve" doesn't necessarily mean "give the scientifically accurate answer" either. Machine learning research goes in parallel with psychology research. A lot of machine learning breakthroughs actually provide ideas and context for studies on human learning, and vice versa. We will be able to both understand and model human behavior better in future generations.
    My third point is that I disagree that people are fundamentally moved by other people's creations. You write
    I think that only a very small fraction of moviegoers care at all about who made the content. This sounds like an argument made in favor of practical effects over CGI, and we all know which side won that. People like you and I might love the practical effects in Oppenheimer simply for being practical, but the big CGI franchises crank out multiple films each year worth billions of dollars. If your argument is that the people driving the entertainment market will pay more for carefully crafted art than generic, by the numbers stories and effects, I can't disagree more.
    Groot, Rocket Raccoon, and Shrek sell films and merchandise based on face and name recognition. What percent of fans do you think know who voiced them? 50%, i.e. 100 million+ people? How many can name a single animator for those characters? What about Master Chief from Halo (originally a one-dimensional character literally from Microsoft), how many people can tell you who wrote, voiced, or animated any of the Bungie Halo games? In fact, most Halo fans feel more connected to the original Bungie character than to the one from the Halo TV series, despite the latter having a much more prominent actor portrayal.
    My final point is not specifically about AI. I live in an area of the US where, decades ago, everyone worked in good-paying textile mill jobs. Then the US outsourced textile production overseas and everyone lost their jobs. The US and my state economies are larger than ever. Jobs were created in other sectors, and we have a booming tech sector--but very few laid-off, middle-aged textile workers retrained and started a new successful career. It's plausible that a lot of new, unknown jobs will spring up thanks to AI, but it's also plausible that "photography" shrinks in the same way that textiles did.
  2. Thanks
    KnightsFan got a reaction from IronFilm in RED Files Lawsuit Against Nikon   
    Red's encoding is JPEG 2000, which has been around since 2000 and provides any compression ratio you want, with a subjective cutoff where it's visually lossless (as does every algorithm). JPEG 2000 has been used for DCPs since 2004 at a compression ratio of about 12:1. So there was actually a pretty long precedent of motion pictures using the exact same algorithm, at a high compression ratio, before Red did it.
    Red didn't add anything in terms of compression technique or ratios. They just applied existing algorithms to Bayer data, the way photo cameras did, instead of RGB data.
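    To make that last point concrete, here is a minimal sketch of the idea: split a Bayer mosaic into its four color planes and hand each plane to an existing JPEG 2000 encoder, rather than demosaicing to RGB first. It assumes an RGGB pattern and an OpenCV build whose JPEG 2000 writer accepts 16-bit input; it's an illustration of the concept, not anyone's actual camera pipeline.
    ```python
    # Minimal sketch: compress the four Bayer planes directly with an existing
    # JPEG 2000 encoder instead of demosaicing to RGB first. Illustration only.
    import numpy as np
    import cv2

    def split_bayer_rggb(mosaic: np.ndarray) -> dict:
        """Split a single-channel RGGB mosaic into its four color planes."""
        return {
            "R":  mosaic[0::2, 0::2],
            "G1": mosaic[0::2, 1::2],
            "G2": mosaic[1::2, 0::2],
            "B":  mosaic[1::2, 1::2],
        }

    def compress_planes(mosaic: np.ndarray, quality: int = 500) -> None:
        """Write each plane as a .jp2 file via OpenCV's JPEG 2000 writer.
        (See OpenCV's docs for how the 0-1000 value maps to quality.)"""
        for name, plane in split_bayer_rggb(mosaic).items():
            ok = cv2.imwrite(f"{name}.jp2", plane,
                             [cv2.IMWRITE_JPEG2000_COMPRESSION_X1000, quality])
            print(name, plane.shape, "written" if ok else "failed")

    if __name__ == "__main__":
        # Stand-in for real sensor data: a 12-bit mosaic stored in uint16.
        fake_mosaic = np.random.randint(0, 4096, (2160, 4096), dtype=np.uint16)
        compress_planes(fake_mosaic)
    ```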
  3. Like
    KnightsFan got a reaction from kaylee in How much bitrate do I actually need?   
    This. Do your own tests and trust your judgement, but here's my opinion.
     
    If all you care about is how it looks on YouTube, 50 is perfectly fine. No one can tell the difference between a 50 and 100 source file on a 7" phone screen, or a 65" 4K screen 12' away with window glare across the front. I care more about how my content looks in Resolve than on YouTube. And even then, I use 100 Mbps H.265 (IPB). When I had an XT3, I shot a few full projects at 200 Mbps and didn't see any improvement. I've done tests with my Z Cam and can't see benefits to >100. I'd be happy with 50 in most scenarios. It might be confirmation bias, but I think I have been in scenarios where 100 looked better than 50, in particular when handheld.
    Keep in mind also that on most cameras, especially consumer cameras, the nominal rate is the upper limit (it would be a BIG problem if the encoder went OVER its nominal rate, because the SD card requirements would be a lie). So while I shoot at 100, the file size is usually closer to 70, so it might not even be as big a file size increase as you think. But for me, 100 Mbps is the sweet spot when shooting H.265 IPB.
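    For a rough sense of what those numbers mean on a card, here's a quick back-of-the-envelope calculation; the 70% "typical average" figure is just an assumption based on the behavior described above.
    ```python
    # Rough storage arithmetic for the bitrates discussed above.
    # The nominal bitrate is a ceiling; actual averages are often lower.
    def gb_per_hour(mbps: float) -> float:
        """Megabits per second -> gigabytes per hour."""
        return mbps * 3600 / 8 / 1000

    for nominal in (50, 100, 200):
        typical = nominal * 0.7  # assumed average for a capped VBR encoder
        print(f"{nominal:>3} Mbps nominal: up to {gb_per_hour(nominal):.0f} GB/h, "
              f"roughly {gb_per_hour(typical):.0f} GB/h at ~{typical:.0f} Mbps average")
    ```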
  4. Like
    KnightsFan got a reaction from Juank in How much bitrate do I actually need?   
  5. Like
    KnightsFan got a reaction from Amazeballs in How much bitrate do I actually need?   
  6. Like
    KnightsFan got a reaction from webrunner5 in How much bitrate do I actually need?   
  7. Like
    KnightsFan got a reaction from Emanuel in How much bitrate do I actually need?   
  8. Haha
    KnightsFan got a reaction from IronFilm in RED Files Lawsuit Against Nikon   
    Honestly the "or more" part is the only bit I really take issue with. Once Elon Musk reaches Mars, he should patent transportation devices that can go 133 million miles or more so he can collect royalties when someone else invents interstellar travel. If he specifically describes "any device that can transport 1 or more persons" that would even cover wormholes that don't technically use rockets!
    If the patent had listed the specific set of frame rates that they were able to achieve, like 24-48 in 4K and 24-120 in 2K (or whatever the Red One was capable of at the time), at the compression ratios that they could hit, that would seem more like fair play. That leaves room for further technical innovation, which, by the way, Red might very well have been first at as well.
  9. Thanks
    KnightsFan got a reaction from IronFilm in RED Files Lawsuit Against Nikon   
    I guess I disagree that anyone should have been allowed to patent 8K compressed Raw, or 12k, or 4k 1000 fps--a decade before any of that was possible. I see arguments that the patent is valid because Red were the first to do 4k raw, so to the victor go the spoils... but since we're talking about differences like 23 vs 24, it's a valid point that they patented numbers that they could not achieve at the time.
    And in a broader sense, I don't understand why a patent should be able to prevent other companies from applying known, existing math to data that they generate. Without even inventing an algorithm, Red legally blocked the use of every existing compression algorithm on that data.
  10. Like
    KnightsFan got a reaction from tupp in RED Files Lawsuit Against Nikon   
  11. Like
    KnightsFan got a reaction from MurtlandPhoto in RED Files Lawsuit Against Nikon   
  12. Haha
    KnightsFan got a reaction from ntblowz in Canon RF 5.2mm f/2.8L Dual Fisheye 3D VR Lens   
    3D porn is last decade, we're way beyond that haha
  13. Like
    KnightsFan got a reaction from webrunner5 in Canon RF 5.2mm f/2.8L Dual Fisheye 3D VR Lens   
    I've been working remote since pre-pandemic. The question isn't whether I like hopping on a Zoom call, it's whether I prefer it over commuting 50 minutes each way in rush hour traffic.
    Depends on who is doing the saving. The huge companies that own and rent out offices definitely don't like it. I much prefer working from my couch, 10 feet from my kitchen, to working in an office!
  14. Like
    KnightsFan got a reaction from majoraxis in Unreal and python background remover   
    The matte is pretty good! Is it this repo you are using? You mentioned RVM in the other topic. https://github.com/PeterL1n/RobustVideoMatting
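    For reference, here's a hedged sketch of what driving RVM from Python looks like, based on the torch.hub entry points described in that repo's README (double-check the exact signatures there before relying on it); read_frames is a hypothetical helper standing in for whatever you use to load frames as tensors.
    ```python
    # Hedged sketch: per-frame inference with RobustVideoMatting (RVM) via torch.hub.
    import torch

    model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3").eval()

    rec = [None] * 4           # recurrent states carried across frames
    downsample_ratio = 0.25    # lower = faster; tune to your resolution

    with torch.no_grad():
        for frame in read_frames("test_take.mp4"):   # hypothetical frame loader
            # frame: float tensor of shape (1, 3, H, W), values in [0, 1]
            fgr, pha, *rec = model(frame, *rec, downsample_ratio)
            # pha is the alpha matte; composite fgr over the Unreal render here
    ```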
    Tracking of course needs some work. How are you currently tracking your camera? Is this all done in real time, or are you compositing after the fact? I assume you are compositing later, since you mention syncing tracks by audio. If I were you and over the crane's weight limit, I would ditch the crane: get some wide camera handles and make slow, deliberate movements, and mount some proper tracking devices on top instead of a phone, if that's what you're using now.
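    On the audio sync point, the offset between the two recordings can be estimated automatically by cross-correlating their audio. A minimal sketch is below, assuming both tracks have been exported as mono WAVs at the same sample rate; the filenames are placeholders.
    ```python
    # Estimate the offset between two recordings from their shared audio,
    # via cross-correlation. Assumes mono WAV files at the same sample rate.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import correlate

    def offset_seconds(ref_wav: str, other_wav: str) -> float:
        rate_ref, ref = wavfile.read(ref_wav)
        rate_other, other = wavfile.read(other_wav)
        assert rate_ref == rate_other, "resample first if the rates differ"
        ref = ref.astype(np.float32)
        other = other.astype(np.float32)
        corr = correlate(ref, other, mode="full")
        lag = int(np.argmax(corr)) - (len(other) - 1)
        return lag / rate_ref   # seconds to delay `other` so it lines up with `ref`

    print(offset_seconds("camera_audio.wav", "reference_audio.wav"))
    ```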
    Of course, the downside to this approach compared to the projected background we're talking about in the other topic is that you can merge lighting more easily with a projected background, and with this approach you also need to synchronize a LOT more settings between your virtual and real camera. With a projected background you only need to worry about focus; with this approach you need to match exposure, focus, zoom, noise pattern, color response, and on and on. It's all work that can be done, but it makes the whole process very tedious to me.
  15. Like
    KnightsFan reacted to Gianluca in Unreal and python background remover   
    Hello everyone... I preferred to open a new topic rather than keep cluttering up "My Journey To Virtual Production".
    Over carnival I was able to do some more tests with Unreal, and this time I recorded my subject up against a yellow wall... As you can see, in controlled working conditions the results can be really good. Obviously there is still a lot of room for improvement: for example, I have to synchronize the two video tracks by recording a common audio track, and I have to balance the gimbal better (I have a Crane-M, and with the mobile phone mounted I exceed the weight it can support, so it vibrates a lot). But apart from that, if I had an actress who did something sensible instead of filming all the time 🙂 I might at this point think I can shoot something decent... What do you think? Opinions, advice?
     
  16. Like
    KnightsFan got a reaction from majoraxis in My Journey To Virtual Production   
    I have a control surface I made for various software. I have a couple of rotary encoders just like the one you have, which I use for adjusting selections, but I got a higher resolution one (LPD-3806) for finer controls, like rotating objects or controlling automation curves. Just like you said, having infinite scrolling is imperative for flexible control.
    I recommend still passing raw data from the dev board to the PC, and using desktop software to interpret the raw data. It's much faster to iterate, and you have much more CPU power and memory available. I wrote an app that receives the raw data from my control surface over USB, then transmits messages out to the controlled software using OSC. I like OSC better than MIDI because you aren't limited to low resolution 7-bit messages; you can send float or even string values. Plus OSC is much more explicit about port numbers, at least in the implementations I've used. But having desktop software interpreting everything was a game changer for me compared to sending MIDI directly from the Arduino.
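    For anyone curious what that bridge looks like, here's a hedged sketch of the "raw data in, OSC out" idea in Python. The serial message format ("encoder_id,delta" per line), port name, and OSC address are all made-up assumptions; it just assumes pyserial and python-osc are installed.
    ```python
    # Sketch of a desktop bridge: read raw encoder events from the dev board over
    # USB serial and forward them as OSC messages to the controlled software.
    import serial                                   # pyserial
    from pythonosc.udp_client import SimpleUDPClient

    SERIAL_PORT = "/dev/ttyACM0"                    # adjust for your board/OS
    client = SimpleUDPClient("127.0.0.1", 9000)     # OSC host/port of the target app

    with serial.Serial(SERIAL_PORT, 115200, timeout=1) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            encoder_id, delta = line.split(",")     # assumed "id,delta" format
            # OSC isn't limited to 7-bit values, so send the delta as a float.
            client.send_message(f"/surface/encoder/{encoder_id}", float(delta))
    ```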
  17. Like
    KnightsFan got a reaction from BTM_Pix in My Journey To Virtual Production   
  18. Like
    KnightsFan reacted to BTM_Pix in My Journey To Virtual Production   
    So....getting into the time machine and going back to where the journey was a few weeks ago.
    To recap, my interest in these initial steps is in exploring methods of externally controlling the virtual camera inside Unreal Engine rather than getting live video into it to place in the virtual world.
    To me, that is the bigger challenge as, although far from trivial, the live video and keying aspect is a known entity to me and the virtual camera control is the glue that will tie it together.
    For that, we need to get the data from the outside real world in terms of the camera position, orientation and image settings in to Unreal Engine and then do the processing to act upon that and translate it to the virtual camera.
    To achieve this, Unreal Engine offers Plug Ins to get the data in and Blueprints to process it.
    The Plug Ins on offer for different types of data for Unreal Engine are numerous and range from those covering positional data from VR trackers to various protocols to handle lens position encoders, translators for PTZ camera systems and more generic protocols such as DMX and MIDI.
    Although it is obviously most closely associated with music, it is this final one that I decided to use.
    The first commercially available MIDI synth was the Sequential Prophet 600, which I bought (and still have!) a couple of years after it was released in 1982. In the intervening four decades (yikes!) I have used MIDI for numerous projects outside of just music, so it's not only a protocol that I'm very familiar with, but one that also offers a couple of advantages for this experimentation.
    The first is that, due to its age, it is a simple, well documented protocol to work with, and the second is that, due to its ubiquity, there are tons of cheaply available control surfaces that greatly help when you are prodding about.
    And I also happen to have quite a few lying around including these two, the Novation LaunchControl and the Behringer X-Touch Mini.


    The process here, then, is to connect either of these control surfaces to the computer running Unreal Engine and use the MIDI plug in to link their various rotary controls and/or switches to the operation of the virtual camera.
    Which brings us now to using Blueprints.
    In very simple terms, Blueprints allow the linking of elements within the virtual world to events both inside it and external.
    So, for example, I could create a Blueprint that when I pressed the letter 'L' on the keyboard could toggle the light on and off in a scene or if I pressed a key from 1 to 9 it could vary its intensity etc.
    For my purpose, I will be creating a Blueprint that effectively says "when you receive a value from pot number 6 of the LaunchControl then change the focal length on the virtual camera to match that value" and "when you receive a value from pot number 7 of the LaunchControl then change the focus distance on the virtual camera to match that value" and so on and so on.
    By defining those actions for each element of the virtual camera, we will then be able to operate all of its functions externally from the control surface in an intuitive manner.
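    To show the shape of that mapping outside of Blueprint form, here is a rough Python equivalent using the mido library. The CC numbers and parameter ranges are made up for illustration, and the dictionary stands in for the actual virtual camera actor; the real work in this project happens in the Blueprint itself.
    ```python
    # Illustrative only: the "CC number -> camera parameter" mapping from the
    # Blueprint, sketched in Python with mido. Numbers and ranges are examples.
    import mido

    # control change number -> (virtual camera parameter, output range)
    MAPPING = {
        6: ("focal_length_mm", (14.0, 135.0)),
        7: ("focus_distance_cm", (30.0, 1000.0)),
        8: ("aperture_fstop", (1.4, 22.0)),
    }

    def scale(cc_value: int, lo: float, hi: float) -> float:
        """Rescale a 7-bit MIDI value (0-127) onto the parameter's own range."""
        return lo + (cc_value / 127.0) * (hi - lo)

    camera = {}  # stand-in for the virtual camera actor

    with mido.open_input() as port:          # default MIDI input
        for msg in port:
            if msg.type == "control_change" and msg.control in MAPPING:
                name, (lo, hi) = MAPPING[msg.control]
                camera[name] = scale(msg.value, lo, hi)
                print(name, round(camera[name], 2))
    ```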
    Blueprints are created using a visual scripting language made up of connected nodes; it is accessible, but contains enough options and full access to the internals to create deep functionality if required.
    This is the overview of the Blueprint that I created for our camera control.

    Essentially, what is happening here is that I have created a number of functions within the main Blueprint which are triggered by different events.
    The first function that is triggered on startup is this Find All MIDI Devices function, which finds our MIDI controller and is shown in expanded form here.

    This function cycles through the attached MIDI controllers and lists them on the screen and then steps on to find our specific device that we have defined in another function.
    Once the MIDI device is found, the main Blueprint then processes incoming MIDI messages and conditionally distributes them to the various different functions that I have created for processing them and controlling the virtual camera such as this one for controlling the roll position of the camera.

    When the Blueprint is running, the values of the parameters of the virtual camera are then controlled by the MIDI controller in real time.
    This image shows the viewport of the virtual camera in the editor with its values changed by the MIDI controller.

    So far so good in theory, but the MIDI protocol does present us with some challenges, as well as some opportunities.
    The first challenge is that the value for most "regular" parameters such as note on, control change and so on that you would use as a vehicle to transmit data only ranges from 0 to 127 and doesn't support fractional or negative numbers.
    If you look at the pots on the LaunchControl, you will see that they are regular pots with finite travel so if you rotate it all the way to the left it will output 0 and will progressively output a higher value until it reaches 127 when it is rotated all the way to the right.
    As many of the parameters that we will be controlling have much greater ranges (such as focus position) or have fractional values (such as f stop) then another solution is required.
    If you look at the pots on the X-Touch Mini, these are digital encoders so have infinite range as they only interpret relative movement rather than absolute values like the pots on the LaunchControl do.
    This enables them to take advantage of being mapped to MIDI events such as Pitch Bend which can have 16,384 values.
    If we use the X-Touch Mini, then, we can create the Blueprint in such a way that it either increments or decrements a value by a certain amount rather than absolute values, which takes care of parameters with much greater range than 0 to 127, those which have negative values and those which have fractional values.
    Essentially we are using the encoders or switches on the X-Touch Mini as over-engineered plus and minus buttons, with the extent of each increment/decrement set inside the Blueprint.
    Workable, but not ideal, and it also makes it trickier to determine a step size for making large stepped changes (think ISO 100 to 200 to 400 to 640, which is only a few clicks on your camera but a lot more if you have to go in increments of, say, 20), and of course every focus change would be a looooongg pull.
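    As a rough illustration of the relative-encoder approach (not the actual Blueprint logic), the sketch below turns relative CC values into signed ticks and walks a step table for a parameter like ISO. The specific relative encoding (1 = +1, 127 = -1) is an assumption about how the encoder is configured.
    ```python
    # Sketch: treat a relative encoder as plus/minus buttons driving a step table.
    # Assumes the common "two's complement" relative CC mode (1 = +1, 127 = -1).
    ISO_STEPS = [100, 200, 400, 640, 800, 1250, 1600, 3200, 6400]

    def tick_from_cc(value: int) -> int:
        """Convert a relative-mode CC value into a signed tick."""
        return value if value < 64 else value - 128

    class SteppedParam:
        def __init__(self, steps, index=0):
            self.steps, self.index = steps, index

        def nudge(self, tick: int):
            self.index = max(0, min(len(self.steps) - 1, self.index + tick))
            return self.steps[self.index]

    iso = SteppedParam(ISO_STEPS)
    print(iso.nudge(tick_from_cc(1)))     # one click right -> 200
    print(iso.nudge(tick_from_cc(127)))   # one click left  -> 100
    ```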
    There is also the aspect that, whilst perfectly fine for initial development, these two MIDI controllers are not only wired but also pretty big.
    As our longer term goal is for something to integrate with a camera, the requirement would quickly be for something far more compact, self powered and wireless.
    So, the solution I came up with was to lash together a development board and a rotary encoder and write some firmware that would make it a much smaller MIDI controller that would transmit to Unreal Engine over wireless Bluetooth MIDI.

    Not pretty but functional enough.
    To change through the different parameters to control, you just press the encoder in and it then changes to control that one.
    If that seems like less than real-time camera control, since you have to switch between parameters, then you are right, it is; but this form is not its end goal, so the encoder is mainly for debugging and testing.
    The point of such a module is how a camera will talk to it, with the module then processing that data for onward transmission to Unreal Engine. So it's just an interface platform for other prototypes.
    In concert with the Blueprint, since we now control both the format of the data being transmitted and how it is interpreted, we have all the options we need to encode and decode it, fit what we need within MIDI's constraints, and easily handle the fractional numbers, negative numbers and large numbers that we will need.
    It also offers the capability of different degrees of incremental control as well as direct.
    But that's for another day.
    Meanwhile, here is a quick video of the output of it controlling the virtual camera in coarse incremental mode.
     
  19. Like
    KnightsFan reacted to Gianluca in My Journey To Virtual Production   
    While I am learning to import worlds into Unreal, learning to use the Sequencer, etc., I am also doing some tests, which I would call extreme, to understand how far I can go...
    I would call this test a failure, but I have already seen that with fairly homogeneous backgrounds, with the subject close enough and in focus (here I used a manual focus lens), and as long as I don't have to frame the feet (in those cases I will use the tripod), it is possible to catapult the subject into the virtual world quite easily and in a realistic way.
    The secret is to shoot many small scenes, with the values fed to the script optimized for each particular scene.
    Next time I'll try to post something better made.
     
     
  20. Like
    KnightsFan reacted to Gianluca in My Journey To Virtual Production   
    Sorry if I post my tests often, but honestly I've never seen anything like it; for a month I also paid 15 euros for RunwayML, which in comparison is unwatchable.
    We are probably not at professional levels yet, but by retouching the mask a little in Fusion, in my opinion we are almost there... I am really amazed by what you can do...
     
     
  21. Like
    KnightsFan reacted to Gianluca in My Journey To Virtual Production   
    From the same developers of the Python software that gives you a green screen (and a matte) from a tripod shot plus a clean plate of the background without the subject, I found this other one, called RVM, which lets you do the same thing even with moving shots, starting directly with the subject in the frame. For static shots the former is still much superior, but this one has its advantages when used in the right conditions: with a well-lit subject and a solid background, it can replace a green screen.
     
    Or, with a green or blue background, you can get a clean subject even if she barely stands out from the background, or even if the clothes she is wearing contain green or blue.
    Now I am really ready to shoot small scenes in Unreal Engine, and I can also move slightly within the world itself thanks to the virtual plugin and these Python tools.
    The problem is that the actor has no intention of collaborating...
  22. Like
    KnightsFan got a reaction from Mmmbeats in Prores is irrelevant, and also spectacular!   
    This. The main concrete benefit of ProRes is that it's standard. There are a couple of defined flavors, and everyone from the camera manufacturers, to the producers, to the software engineers, knows exactly what they are working with. Standards are almost always not the best way to do something, but they are the best way to make sure it works. "My custom Linux machine boots in 0.64 seconds, so much faster than Windows! Unfortunately it doesn't have USB drivers, so it can only be used with a custom keyboard and mouse I built in my garage" is fairly analogous to the ProRes vs. H.265 debate.
    As has been pointed out, on a technical level 10-bit 4:2:2 H.264 All-I is essentially interchangeable with ProRes. Both are DCT compression methods, and H.264 can be tuned with as many custom options as you like, including setting a custom quantization matrix. H.265 expands on it by allowing different block sizes, but that's something you can turn off in encoder settings. However, given a camera or piece of software, you have no idea what settings they are actually choosing. Compounding that, many manufacturers use higher NR and more sharpening for H.264 than for ProRes, not for a technical reason, but based on consumer convention.
    Obviously once you add IPB, it's a completely different comparison, no longer about comparing codecs so much as comparing philosophies. Speed vs. size.
    As far as decode speed, it's largely down to hardware choices and, VERY importantly, software implementation. Good luck editing H.264 in Premiere no matter your hardware. Resolve is much better, if you have the right GPU. And if you are transcoding with ffmpeg, H.265 is considerably faster to decode than ProRes with NVIDIA hardware acceleration. But this goes back to the first paragraph--when we talk about differences in software implementation, it is better to just know the exact details from one word: "ProRes".
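    As an example of that kind of ffmpeg transcode, here's a hedged sketch that decodes H.265 with NVIDIA hardware and encodes ProRes 422 HQ in software. The flags are standard ffmpeg options, but verify them against your own build; the filenames are placeholders.
    ```python
    # Sketch: NVDEC-accelerated H.265 decode feeding a software ProRes encode.
    import subprocess

    def h265_to_prores(src: str, dst: str) -> None:
        cmd = [
            "ffmpeg",
            "-hwaccel", "cuda",      # use NVIDIA hardware for the H.265 decode
            "-i", src,
            "-c:v", "prores_ks",     # software ProRes encoder
            "-profile:v", "3",       # 3 = ProRes 422 HQ
            "-pix_fmt", "yuv422p10le",
            "-c:a", "copy",
            dst,
        ]
        subprocess.run(cmd, check=True)

    h265_to_prores("camera_clip.mp4", "camera_clip_prores.mov")
    ```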
  23. Like
    KnightsFan got a reaction from BTM_Pix in Zoom F3 - Compact 2 channel 32 bit float audio recorder   
    Wow, great info @BTM_Pix, which confirms my suspicions: Zoom's app is the Panasonic-autofocus of their system. I've considered buying a used F2 (not BT), opening it up and soldering the pins from a Bluetooth Arduino onto the Rec button, but I don't have time for any more silly projects at the moment. I wish Deity would update the Connect with 32 bit. Their receiver is nice and bag-friendly, and they've licensed dual transmit/rec technology already. AND they have both lav and XLR transmitters.
  24. Thanks
    KnightsFan reacted to BTM_Pix in Zoom F3 - Compact 2 channel 32 bit float audio recorder   
    Mmmm....
    So I've been looking further into this today and it's actually a complete mess.
    And not just because of the app.
    First off, I should have been clearer in the original post regarding the requirement to use an UltraSync Blue unit (€150 approx) for syncing multiple Zoom, Atomos and iOS apps over BLE.
    Which brings us to an issue...
    The Zoom units with BLE functionality can use the connectivity for the app or for wireless timecode but not both simultaneously.
    This is a menu switched option (such as on my H3-VR and the new F3) but the F2-BT doesn't have an obvious mechanism to change this on the fly as it has neither a screen nor a switch to do it with.
    Unbelievably, Zoom's solution to this lack of UI on the unit is that you have to connect it via USB to a Mac or a PC running Zoom's F2 Editor application to switch its mode.
    The F2BT unit has five physical buttons on it that Zoom could have used to control that switch on boot up (i.e. hold down Stop button when switching on for Control and hold down Play when booting for timecode mode) but instead have opted for an utterly ridiculous and clunky solution.
    So dynamically changing it on the hoof is completely impractical.
    In terms of control from the mobile app, it is inexplicable why it can't control more than one device at a time.
    I've had a bit of a hack around interrogating the H3-VR this morning and I can see that there are enough BLE Services and Characteristics to make it possible to address units individually, within the limits of how many simultaneous BLE devices can be connected.
    A more simplified app option for multi units that did basic rec start/stop and signal present leds rather than full metering and settings changes etc would be perfectly doable and adequate for at least four units.
    For now, in terms of rec start/stop, it's doable across multiple units if you close the app and re-open it to choose a different unit on the initial scan, but that's clunky as hell too.
    So, as it stands, to do two timecode synced F2BTs and the F3 they would have to all be in timecode mode (and you'd have to have set the F2s up beforehand) and started manually because there is also no mechanism to do a record start/stop command for the whole group.
    OK, so considering that the F2BT has a run time of about 15 hours on batteries, and an entire recording for that period would take up a fairly trivial amount of storage (compared to video files) of about 8GB, the solution would be to sync it to the UltraSync Blue, put it in record and use the lock mechanism to keep it there, then press Record on the F3 and leave it running all day.
    But you'd have to be careful with that too, as what this guy shows in his video review is that it is easily possible to think the F2 is in record when it isn't, due to the enormous delay between pressing the Rec button and it actually beginning the recording.
    In essence, I don't think any of this is insurmountable, and it can be filed under first world problems considering the amount of scope on offer here by having a couple of F2-BTs, an F3 and an UltraSync Blue, but it's infuriating that Zoom seem to have had such a massive failure in joined-up thinking on how it hangs together as a full system.
    The easiest solution to fix this would be to have an option in the Zoom devices to start/stop recording on receipt/suspension of Bluetooth timecode and use the UltraSync Blue as master controller by using its start and stop run function.
    Maddening how short sighted Zoom have been here.
  25. Like
    KnightsFan got a reaction from BTM_Pix in Zoom F3 - Compact 2 channel 32 bit float audio recorder   
    I was looking at this when it was announced, with the exact same thought about using F2s in conjunction. From what I can tell though, the app only pairs with a single recorder, so you can't simultaneously rec/stop all 3 units wirelessly, right?