I think the Ian Lyons post that Victoria linked to may hold part of the answer to this. Hardware acceleration specifically for machine learning/AI might be a missing link that helps explain why some computers can process AI features so much faster than others, and why older computers take much more time.

I just did some tests, and below is what I got. On my mid-range but recent laptop, I have so far never seen a Denoise time longer than a minute. My hardware list is at the end… really nothing special from a CPU/GPU point of view, so I strongly suspect it's the Neural Engine machine learning hardware acceleration that cuts the times, and that machines with older AI acceleration hardware, or none at all, will simply take longer. The Neural Engine is only in Apple Silicon Mac processors, which might help explain why owners of older Intel-based Macs are similarly reporting much longer AI Denoise times, in the minutes.

On the Windows side, I wonder if times are consistently faster using an NVIDIA GPU new enough to have their more recent AI acceleration hardware. (I notice that D Fosse above only took 33 seconds using an RTX 3060 with 12GB VRAM. That GPU model was released the same year as my laptop.)

In my tests, while the GPU is very busy, the CPU is very quiet during AI Denoise, with other background processes using more CPU than Lightroom Classic. This looks very similar to video editing, where newer computers render video very quickly because they have both a powerful GPU and hardware acceleration for popular video codecs.
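Purely as an illustration of the kind of gap dedicated hardware makes, here is a rough benchmark sketch of my own (assuming PyTorch is installed; this is not anything Adobe actually ships) that times a denoise-like convolution workload on the CPU versus whatever accelerator is available. One caveat: on a Mac this exercises the GPU through Metal rather than the Neural Engine itself, which apps can only reach through Core ML, so it only shows the general CPU-versus-accelerator difference.

import time
import torch

@torch.no_grad()  # inference only; no gradients needed for timing
def time_conv(device: str, iterations: int = 20) -> float:
    """Time repeated image-sized 2D convolutions on the given device."""
    conv = torch.nn.Conv2d(3, 64, kernel_size=3, padding=1).to(device)
    x = torch.randn(1, 3, 1024, 1024, device=device)

    # Pick the right barrier so we measure finished work, not queued work.
    if device == "cuda":
        sync = torch.cuda.synchronize
    elif device == "mps":
        sync = torch.mps.synchronize
    else:
        sync = lambda: None

    conv(x)  # warm-up pass so one-time setup cost isn't counted
    sync()
    start = time.perf_counter()
    for _ in range(iterations):
        conv(x)
    sync()
    return time.perf_counter() - start

devices = ["cpu"]
if torch.cuda.is_available():
    devices.append("cuda")  # NVIDIA GPU on Windows/Linux
elif torch.backends.mps.is_available():
    devices.append("mps")   # Apple Silicon GPU via Metal

for dev in devices:
    print(f"{dev}: {time_conv(dev):.2f} s for 20 passes")

On machines with a recent GPU the accelerated time is typically a large multiple faster than the CPU time, which is consistent with the AI Denoise behavior described above.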