Leaky ReLU

  • A variant of ReLU that fixes its weaknesses by giving negative inputs a small fixed slope (e.g., 0.01).
  • Does not saturate (i.e., units will not die)
  • Closer to zero-centered outputs
  • Leads to fast convergence
  • Computationally efficient
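The properties above follow from the definition f(x) = max(αx, x) with a small fixed α. A minimal NumPy sketch (the name `leaky_relu` and the default slope 0.01 are illustrative):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # f(x) = x for x > 0, alpha * x otherwise.
    # The small negative slope keeps the gradient nonzero for x < 0,
    # so neurons cannot get stuck with zero gradient ("die").
    return np.where(x > 0, x, alpha * x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # → [-0.02  0.    3.  ]
```

Computation is just a comparison and a multiply, which is why it is as cheap as plain ReLU.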

Parametric ReLU

A more generalized version: the slope that is fixed at 0.01 in Leaky ReLU is instead treated as a parameter and learned by the model.

  • Does not saturate (i.e., units will not die)
  • Parameter learned from data
  • Leads to fast convergence
  • Computationally efficient
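Because the negative slope α is a parameter, it receives a gradient just like a weight. A minimal NumPy sketch of the forward pass and the gradient with respect to α, assuming a single shared slope (function names are illustrative):

```python
import numpy as np

def prelu(x, alpha):
    # Same form as Leaky ReLU, but alpha is a learnable parameter.
    return np.where(x > 0, x, alpha * x)

def prelu_grad_alpha(x, grad_out):
    # d f / d alpha = x where x <= 0, and 0 where x > 0,
    # so only negative inputs contribute to alpha's gradient.
    return np.sum(grad_out * np.where(x > 0, 0.0, x))

x = np.array([-2.0, 3.0])
print(prelu(x, alpha=0.25))                    # → [-0.5  3. ]
print(prelu_grad_alpha(x, np.ones_like(x)))    # → -2.0
```

During training, α would be updated by gradient descent along with the other parameters, letting the data decide how much negative signal to pass through.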