PlurVA-LLM @ AACL 2026


PlurVA-LLM

Pluralistic Value Alignment of Large Language Models

Multiple Values. Multiple Languages. Unified Alignment.

Welcome to the First Workshop on Pluralistic Value Alignment of LLMs (PlurVA-LLM)!

Large language models (LLMs) have achieved remarkable progress and are widely used in applications such as content generation, information retrieval, and decision support. As these systems are deployed globally, concerns have emerged about the values and norms implicitly embedded in them. Beyond technical performance, LLMs may encode cultural assumptions and biases that influence user decisions, shape societal perceptions, and fail to reflect the diversity of human values across regions and communities. Addressing this challenge requires pluralistic alignment, which aims to develop language models that can recognize, reason about, and adapt to diverse value systems across different cultural and social contexts.

The PlurVA-LLM workshop aims to achieve two primary goals:

First, it aims to provide a dedicated venue for advancing research on pluralistic value alignment in large language models. We invite submissions from researchers in NLP, machine learning, AI safety, social science, philosophy, and related fields. Topics of interest include:

  • Theoretical foundations and formalizations of pluralistic value alignment
  • Methods for aligning LLMs with diverse value systems
  • Benchmarks and evaluation protocols
  • Human-AI collaboration for constructing value-sensitive datasets
  • Interpretability and analysis of value alignment
  • Applications in downstream systems and real-world deployment
  • Multilingual, multicultural, low-resource, and multimodal perspectives

The workshop will feature invited keynote talks, a panel discussion, and oral and poster sessions where accepted papers will be presented.

Second, it will host a shared task that evaluates whether large language models can make locally grounded value judgments across different cultural contexts. The task covers three countries (China, Indonesia, and Sri Lanka), using benchmarks that reflect locally defined social value frameworks: Chinese daily-life dilemmas, Indonesian dilemmas grounded in the Pancasila values, and Sri Lankan value-judgment tasks in English and Sinhala. A further track examines pluralistic value transfer, testing whether models aligned with one value system can generalize appropriately to others rather than defaulting to a single generic norm. The challenge is open to a wide range of approaches, including fine-tuning, prompting-based systems, and agent- or tool-augmented methods.

  • Venue: Hengqin, China (with AACL 2026)
  • Workshop date: 10 Nov 2026 (draft)
  • Format: Hybrid · 1 day

News
