Adaptive QP

Adaptive QP adaptively selects a QP for each CU in order to improve perceptual coding quality. The configuration parameter AdaptiveQP specifies whether the feature is enabled; it is off by default.

("AdaptiveQP,-aq", m_bUseAdaptiveQP,false, "QP adaptation based on a psycho-visual model")

Its QP selection principle is: choose a smaller QP for flat blocks and a larger QP for blocks with higher activity, since quantization distortion is more visible in smooth regions than in busy, textured ones.

The activity of a CU is computed from the variance of its luminance component. For a 2Nx2N CU, first compute the variance of each of its four NxN luminance sub-blocks, then derive the CU's activity act_cu from the minimum of those variances:

$$act_{cu} = 1 + \min(var_1,\ var_2,\ var_3,\ var_4)$$
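To make this step concrete, here is a minimal self-contained sketch of the activity computation. The function computeActivity and the flat-array layout are ours for illustration only; HM performs the equivalent per AQ partition in TEncPreanalyzer::xPreanalyze, with slightly different bookkeeping.

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <limits>
#include <vector>

// Activity of one 2Nx2N luma block: 1 + the minimum variance of its
// four NxN sub-blocks (illustrative helper, not HM API).
double computeActivity( const std::vector<int>& luma, int stride, int size2N )
{
  const int n = size2N / 2;
  double dMinVar = std::numeric_limits<double>::max();
  for ( int sb = 0; sb < 4; sb++ )              // four NxN sub-blocks
  {
    const int x0 = ( sb & 1 ) * n;
    const int y0 = ( sb >> 1 ) * n;
    int64_t sum = 0, sumSq = 0;
    for ( int y = y0; y < y0 + n; y++ )
    {
      for ( int x = x0; x < x0 + n; x++ )
      {
        const int p = luma[ y * stride + x ];
        sum   += p;
        sumSq += int64_t( p ) * p;
      }
    }
    const double numPix = double( n ) * n;
    const double dAvg = double( sum ) / numPix;
    const double dVar = double( sumSq ) / numPix - dAvg * dAvg; // E[p^2] - E[p]^2
    dMinVar = std::min( dMinVar, dVar );
  }
  return 1.0 + dMinVar;                         // act_cu = 1 + min variance
}

int main()
{
  // 8x8 block: flat left half, checkerboard right half. Because the
  // minimum over sub-blocks is used, the activity stays low as long as
  // any quarter of the block is flat.
  std::vector<int> luma( 64, 128 );
  for ( int y = 0; y < 8; y++ )
    for ( int x = 4; x < 8; x++ )
      luma[ y * 8 + x ] = 128 + ( ( ( x + y ) & 1 ) ? 40 : -40 );
  std::cout << "act_cu = " << computeActivity( luma, 8, 8 ) << std::endl; // prints 1
}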

In order to use a larger QP in areas with high activity and a smaller QP in flat areas, the act_cu of each 2Nx2N CU must be normalized against the rest of the picture. Let act_f denote the average activity of all 2Nx2N CUs in picture f; the normalized activity norm_act_cu is then:

$$norm\_act_{cu} = \frac{s \cdot act_{cu} + act_f}{act_{cu} + s \cdot act_f}, \qquad s = 2^{QP_A / 6}$$
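Because both activities are positive, this mapping is bounded:

$$\frac{1}{s} < norm\_act_{cu} < s$$

with norm_act_cu = 1 exactly when act_cu = act_f, so the scale factor s caps how far a CU's QP can deviate from the slice QP.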

The adaptation range QP_A is specified by the configuration parameter MaxQPAdaptationRange; the default value is 6.

  ("MaxQPAdaptationRange,-aqr",m_iQPAdaptationRange,6, "QP adaptation range")

The final CU QP is obtained by converting the normalized activity into an integer QP offset and adding it to the slice QP, clipped to the valid range (MAX_QP is 51 in HM):

$$\Delta QP_{cu} = \lfloor 6 \cdot \log_2(norm\_act_{cu}) + 0.49999 \rfloor$$

$$QP_{cu} = \mathrm{Clip3}(-QpBDOffset_Y,\ 51,\ QP_{slice} + \Delta QP_{cu})$$
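A quick worked example with the default QP_A = 6 (so s = 2): a CU four times as active as the picture average, act_cu = 4·act_f, gives

$$norm\_act_{cu} = \frac{2 \cdot 4 + 1}{4 + 2} = 1.5, \qquad \Delta QP_{cu} = \lfloor 6 \cdot \log_2 1.5 + 0.49999 \rfloor = +4$$

i.e. the CU is quantized four QP steps coarser than the slice QP; symmetrically, act_cu = act_f / 4 yields an offset of -4. The sketch after the HM code below reproduces this arithmetic.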

Note: the configuration parameter MaxCuDQPDepth specifies the maximum CU depth at which a delta QP can be signalled, and hence the minimum CU size that Adaptive QP can address. The default is 0 (delta QP only at CTU level), and its value must be less than the maximum CU depth.
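Putting the three parameters together, a minimal configuration sketch in HM's .cfg syntax (the non-default values here are illustrative only):

AdaptiveQP           : 1     # enable adaptive QP (disabled by default)
MaxQPAdaptationRange : 6     # QP_A: CU QP may deviate from the slice QP by up to +/-6
MaxCuDQPDepth        : 2     # signal delta QP down to CU depth 2 (must be < max CU depth)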

The following HM code shows how Adaptive QP computes the QP for each CU:

/** Compute QP for each CU
 * \param pcCU Target CU
 * \param uiDepth CU depth
 * \returns quantization parameter
 */
Int TEncCu::xComputeQP( TComDataCU* pcCU, UInt uiDepth )
{
  Int iBaseQp = pcCU->getSlice()->getSliceQp();
  Int iQpOffset = 0;
  if ( m_pcEncCfg->getUseAdaptiveQP() )
  {
    TEncPic* pcEPic = dynamic_cast<TEncPic*>( pcCU->getPic() );
    UInt uiAQDepth = min( uiDepth, pcEPic->getMaxAQDepth()-1 );
    TEncPicQPAdaptationLayer* pcAQLayer = pcEPic->getAQLayer( uiAQDepth );
    UInt uiAQUPosX = pcCU->getCUPelX() / pcAQLayer->getAQPartWidth();
    UInt uiAQUPosY = pcCU->getCUPelY() / pcAQLayer->getAQPartHeight();
    UInt uiAQUStride = pcAQLayer->getAQPartStride();
    TEncQPAdaptationUnit* acAQU = pcAQLayer->getQPAdaptationUnit();

    Double dMaxQScale = pow(2.0, m_pcEncCfg->getQPAdaptationRange()/6.0); //!< scale factor s = 2^(QP_A/6)
    Double dAvgAct = pcAQLayer->getAvgActivity();  //!< average activity act_f of the picture
    Double dCUAct = acAQU[uiAQUPosY * uiAQUStride + uiAQUPosX].getActivity(); //!< activity act_cu of this CU
    Double dNormAct = (dMaxQScale*dCUAct + dAvgAct) / (dCUAct + dMaxQScale*dAvgAct); //!< normalization
    Double dQpOffset = log(dNormAct) / log(2.0) * 6.0; //!< 6*log2(norm_act), via change of base: log(x)/log(2) = log2(x)
    iQpOffset = Int(floor( dQpOffset + 0.49999 )); //!< round to nearest integer
  }

  return Clip3(-pcCU->getSlice()->getSPS()->getQpBDOffset(CHANNEL_TYPE_LUMA), MAX_QP, iBaseQp+iQpOffset );
}
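As a check on the arithmetic above, the offset computation can be lifted out of HM into a few standalone lines. computeQpOffset is a hypothetical helper; its logic mirrors the body of xComputeQP. The factor 6·log2(·) reflects that the HEVC quantization step size doubles for every increase of 6 in QP.

#include <cmath>
#include <initializer_list>
#include <iostream>

// Hypothetical helper mirroring the offset arithmetic in xComputeQP above.
int computeQpOffset( double dCUAct, double dAvgAct, int iQPAdaptationRange )
{
  const double dMaxQScale = std::pow( 2.0, iQPAdaptationRange / 6.0 );  // scale factor s
  const double dNormAct = ( dMaxQScale * dCUAct + dAvgAct ) / ( dCUAct + dMaxQScale * dAvgAct );
  const double dQpOffset = std::log2( dNormAct ) * 6.0;                 // 6*log2(norm_act)
  return int( std::floor( dQpOffset + 0.49999 ) );                      // round to nearest
}

int main()
{
  // Offsets for CUs at 1/8x .. 8x the average activity, with QP_A = 6.
  for ( double ratio : { 0.125, 0.25, 1.0, 4.0, 8.0 } )
    std::cout << "act_cu/act_f = " << ratio
              << "  ->  dQP = " << computeQpOffset( ratio, 1.0, 6 ) << std::endl;
}

With QP_A = 6 this prints offsets of -5, -4, 0, +4, +5 for the five activity ratios, confirming that the offset grows with activity and saturates toward the +/-QP_A bound.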
