
It's fully up to Nvidia's business decisions what they enable where and what makes sense for which platform and predicted target audience / system resources (nowadays gathered by Nvidia via direct telemetry of their target customers' system data). The encoder itself is still updated via CUDA additions, much like OpenCL can be used for x264's lookahead: Nvidia uses CUDA for its lookahead, additionally for its AQ and 2-pass, and most probably for the coming weighted B-frames as well :)
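For reference, these CUDA-assisted pieces (lookahead, AQ, multi-pass) are the ones exposed through the public NVENC rate-control settings. Below is only a minimal sketch, assuming the NVIDIA Video Codec SDK header (nvEncodeAPI.h) is installed and a recent SDK version; field names may differ slightly between SDK releases, and weighted prediction is exposed separately (as an init-time flag in newer SDKs), so it is left out here. It just fills the config struct and prints it, it does not open an encode session.

```c
/* Minimal sketch: where lookahead, AQ and multi-pass appear in the public
 * NVENC API (NVIDIA Video Codec SDK, nvEncodeAPI.h). Assumes a recent SDK;
 * this only populates the rate-control block, no encode session is opened. */
#include <stdio.h>
#include <string.h>
#include "nvEncodeAPI.h"

int main(void)
{
    NV_ENC_CONFIG cfg;
    memset(&cfg, 0, sizeof(cfg));
    cfg.version = NV_ENC_CONFIG_VER;

    /* Rate-control block: the CUDA-assisted features discussed above live here. */
    cfg.rcParams.rateControlMode  = NV_ENC_PARAMS_RC_VBR;
    cfg.rcParams.enableLookahead  = 1;    /* frame lookahead on */
    cfg.rcParams.lookaheadDepth   = 32;   /* number of frames to look ahead */
    cfg.rcParams.enableAQ         = 1;    /* spatial adaptive quantization */
    cfg.rcParams.enableTemporalAQ = 1;    /* temporal AQ (needs lookahead) */
    cfg.rcParams.aqStrength       = 8;    /* 1..15, 0 = automatic */
    cfg.rcParams.multiPass        = NV_ENC_TWO_PASS_QUARTER_RESOLUTION; /* 2-pass, SDK 10+ */

    printf("lookahead=%d depth=%d aq=%d taq=%d multipass=%d\n",
           (int)cfg.rcParams.enableLookahead, (int)cfg.rcParams.lookaheadDepth,
           (int)cfg.rcParams.enableAQ, (int)cfg.rcParams.enableTemporalAQ,
           (int)cfg.rcParams.multiPass);
    return 0;
}
```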


I'm pretty sure Nvidia also tests their future ASIC optimizations inside CUDA and has a pretty nice, efficient CUDA (GPU) -> ASIC conversion workflow :) Nvidia did this in the past on many levels. I remember Motion Adaptive Deinterlacing being enabled only on one specific piece of hardware, and by accident in a beta driver for every shader count; it was dauntingly slow and unusable on the lower-shader cards, so it was economically nonsense to keep it, with the possible support overhead that would follow. Later it became the default as shader power rose on the more consumer-targeted cards. The more CUDA power, the more tasks you can route there, like the 10-bit decoding overhead. The ASIC is mostly a concern for power efficiency; it makes the most sense for underpowered shader cards or mobile, depending on the target audience.
