In the Cycles-X announcement it is talked about how they wish to better optimize multi-device rendering (Multi-GPU and GPU+CPU) so it offers better load balancing (and presumably performance) without the use of tiles. This will hopefully help in resolving the performance issues you see. Keep in mind that the Cycles-X branch of Blender is still in relatively early development.

> Multi-device rendering: we'll experiment with more fine-grained load balancing without tiles

Another thing: it seems like the goal is to move away from tiled rendering. In slide 14 of the technical presentation for developers they talk about the benefits of using progressive rendering:

> Prepare for rendering algorithms that require progressive passes

Also, with small tiles and denoising, the memory usage is lower than with a full-frame render, at least for OIDN. This (small tiles) was important if you were rendering large output resolutions, so as not to exceed the system RAM with OIDN, compared to using denoising in the compositor. I hope the new method has a solution for this and out-of-memory problems stop being a problem. In the technical presentation for developers, the stated aim is to reduce memory usage for high-resolution renders by rendering the scene in tiles of a "2K" size, but to no longer rely on these tiles as a mechanism for work distribution between devices:

> 2K tiles to support very high res renders

I was also informed by someone a while ago that OIDN actually has a RAM limit that can be activated. The way it works, from my understanding, is that you set a RAM limit (e.g. 1 GB) and then OIDN works on sections of the image in 1 GB parts.
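To make the RAM-limit idea above concrete, here is a rough sketch of how an image can be split into strips so that each strip's pixel buffer stays under a memory cap. This is only an illustration of the concept, not OIDN's actual implementation (the function name and the strip-based splitting are invented for this example; OIDN's real control for this is a filter parameter that bounds its scratch memory).

```python
# Rough sketch: split a full-resolution image into horizontal strips whose
# raw pixel data fits within a memory cap, so each strip can be processed
# independently. Invented for illustration; not OIDN's actual tiling code.

def tile_rows(width, height, channels, bytes_per_channel, max_bytes):
    """Yield (start_row, end_row) strips whose pixel data fits in max_bytes."""
    row_bytes = width * channels * bytes_per_channel
    rows_per_tile = max(1, max_bytes // row_bytes)
    start = 0
    while start < height:
        end = min(start + rows_per_tile, height)
        yield (start, end)
        start = end

# Example: a 15360x8640 ("16K") float RGB image under a 1 GB cap
# splits into two strips; each strip stays under the limit.
tiles = list(tile_rows(15360, 8640, channels=3,
                       bytes_per_channel=4, max_bytes=1 << 30))
```

Note that a real denoiser needs overlap between tiles to avoid seams at the strip borders, which this sketch ignores for simplicity.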
I always had more artifacts in my renders with NLM. I agree that OIDN is very strong and smooths the picture too much, but OIDN is only bad if you use it at full power; if you apply it by a factor, overlaying it with the noisy render, or do some compositing, it's really great. I always use OIDN with a Mix node at a factor of 0.5 or 0.65, and it always looks great for scenes without fabrics or very fine detail patterns; the results even look better than NLM. But yes, as in the sample images posted for comparison above, NLM is the best for renders with such fine patterns and fabric, I guess. So we should still have an option like NLM so that it can be used for such scenes too. (It's really interesting that this AI can't guess details; normally other AIs can understand details and even create new details when they see patterns, but the new AI denoisers seem non-AI, more like a marketing name. I hope one day they really behave like other AIs, such as image-colorization AIs.)

About tiled/GPU+CPU rendering: I actually noticed a significant slowdown in render times (about 2x slower), since I've always rendered my scenes using both, so I really hope they re-implement tiling or figure out a way to use the CPU as well; otherwise I won't be able to benefit from the architectural speed-up at all. I also agree that NLM and tiled rendering should come back…
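The Mix-node trick described above is just a per-pixel linear blend between the noisy render and the denoised result. A minimal sketch, with plain Python lists standing in for image buffers (the 0.5/0.65 factors come from the post; the function name is made up for this example):

```python
# Minimal sketch of blending a denoised render back with the noisy one,
# as with a Mix node in Blender's compositor. Lists stand in for images.

def mix(noisy, denoised, factor):
    """Linear blend: factor=0.0 keeps the noisy render, 1.0 the denoised one."""
    return [(1.0 - factor) * n + factor * d for n, d in zip(noisy, denoised)]

noisy_px = [0.2, 0.8, 0.4]      # hypothetical pixel values from the raw render
denoised_px = [0.3, 0.7, 0.4]   # same pixels after OIDN
blended = mix(noisy_px, denoised_px, 0.65)
```

At a factor of 0.65, 35% of the original noise is retained, which is what keeps fine fabric detail from being smoothed away entirely.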