Abe Fetterman
About Abe Fetterman
Abe Fetterman is a Member of Technical Staff who specializes in the ethical decision-making of large language models (LLMs), co-authoring significant research in the field.
Title at Current Position
Abe Fetterman holds the title of Member of Technical Staff. Within this role, he contributes to ongoing research and development in the specialized domain of ethical decision-making in large language models (LLMs). His contributions encompass both theoretical explorations and applied engineering projects.
Research on LLM Ethical Decision-Making
Abe Fetterman has actively contributed to research centered around LLM ethical decision-making. His work includes analyzing how LLMs assess ethical scenarios, and understanding the differences in mistake patterns between human evaluators and AI models. He has focused specifically on the challenges and potential of LLMs like GPT-4 in accurately navigating reworded ethical scenarios.
Co-authored Publications
Abe Fetterman co-authored the publication titled 'Update: The state of LLM ethical decision-making.' This work details advancements and insights into how large language models approach ethical choices and the ongoing improvements in this area. It serves as a significant resource for peers and practitioners in the field.
Human Evaluators and LLMs in Ethical Scenarios
In his research, Abe Fetterman participates in testing both human evaluators and LLMs on various ethical scenarios. This comparative analysis has identified distinct reasons why humans and LLMs might err in their assessments. His findings aim to enhance the reliability and fairness of LLM-based decisions.
Implications of LLMs in Ethical Decision-Making
Abe Fetterman explores the broader implications of LLMs potentially making ethical decisions in place of humans. His research delves into understanding where LLMs succeed and fail, and what these successes and failures mean for future applications in society and technology. This ongoing work seeks to balance theoretical insights with practical outcomes.