Juhyeon's Blog
Tag: RLHF
4 items
April 13, 2026
Alignment Faking in Large Language Models
Tags: paper, alignment_faking, self_preservation, AI_safety, RLHF, strategic_deception, FSPM, instrumental_convergence, Anthropic
April 13, 2026
Discovering Language Model Behaviors with Model-Written Evaluations
Tags: paper, LLM_evaluation, inverse_scaling, sycophancy, self_preservation, instrumental_convergence, RLHF, AI_safety, model_written_evaluation, FSPM
April 13, 2026
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Tags: paper, RLHF, AI_Safety, Reward_Model, Survey, Alignment, Governance, FSPM_confound
April 13, 2026
The Alignment Problem from a Deep Learning Perspective
Tags: paper, alignment, instrumental_convergence, deceptive_alignment, reward_hacking, power_seeking, situational_awareness, RLHF, AI_safety, FSPM, ICLR2024