The First Workshop on Intelligent and Interactive Writing Assistants
The purpose of this interdisciplinary workshop is to bring together researchers from the natural language processing (NLP) and human-computer interaction (HCI) communities, as well as industry practitioners and professional writers, to discuss innovations in building, improving, and evaluating intelligent and interactive writing assistants. We plan to alternate our workshop venue between an NLP conference and an HCI conference each year to facilitate collaboration.
This year the workshop will be held at ACL 2022 in Dublin, Ireland, on 26 May, 2022.
We expect the workshop to be hybrid, unless the pandemic situation, which is outside our control, dictates otherwise.
- Paper submission deadline: 28 February, 2022
- Paper acceptance notification: 26 March, 2022
- Paper camera-ready deadline: 10 April, 2022
- Workshop date: 26 May, 2022
Call for Papers
The purpose of this interdisciplinary workshop is to discuss innovations in building, improving, and evaluating intelligent and interactive writing assistants. To this end, we hope to bring together researchers from the natural language processing (NLP) and human-computer interaction (HCI) communities, as well as industry practitioners and professional/amateur writers. We have seen an enormous shift in the ability of intelligent writing assistants in the past decade, from spell checkers to paraphrasers to content generators. We anticipate many new forms of human-machine collaborative writing, where machines provide creative and innovative support to enhance human potential, to appear in the near future.
In recent years, AI-powered writing assistants have gained increasing traction, along with the development of pre-trained large language models such as GPT-3 (Brown et al., 2020) and Jurassic-1 (Lieber et al., 2021). A growing body of NLP research is improving and scaling up such models for text generation. However, the common practice for building and evaluating these models in NLP has several major shortcomings in the context of human-machine collaborative writing: (1) it does not take into account the bidirectional, interactive nature of human-machine interactions, (2) standardized evaluation metrics or "gold standards" do not exist for open-ended tasks such as creative writing, and (3) it overlooks real-world deployment challenges and human aspects of evaluation such as usability, engagement, ownership, and satisfaction.
To envision more supportive intelligent and interactive writing assistants, we invite researchers from the NLP and HCI communities, industry practitioners, and writers to participate in this workshop. Leveraging various NLP techniques while accounting for their limitations requires researchers, practitioners, and writers to holistically understand language model capabilities as well as their strengths and weaknesses in various collaborative contexts. Through this workshop, we hope to enhance our understanding of the design and evaluation of writing assistants and the creative process between humans and machines, while being informed by the needs of writers.
We invite submissions from the NLP and HCI communities, as well as industry practitioners and professional writers, that discuss innovations in building, improving, and evaluating intelligent and interactive writing assistants.
Specific topics include, but are not limited to:
- Combining NLP techniques (e.g. style transfer, text planning, controllability) with interaction paradigms between users and writing assistants (e.g. interfaces, iterative processes, feedback), such as a formality style transfer system for revising professional communications
- Assistance on different stages of the writing process (e.g. planning, revising), different types of writing (e.g. expository, persuasive), and different applications (e.g. journalism, fiction)
- Evaluation methodologies for writing assistants, writing process, and resultant text
- Addressing underrepresentation of languages, types of writers (e.g. vernacular variations), and writing tasks for targeted writing assistance (note that for non-English systems, we request that the figures and examples be translated into English prior to review)
- Writing assistant ownership issues, including legal issues with copyright and psychological sense of ownership
- Practical challenges for building real-world systems such as Grammarly and WordTune (e.g. latency, near-perfect quality, personalization, and evolution of language)
- User studies or ethnographic studies of writers who use writing assistants
- Demonstration of simple prototypes of intelligent interfaces or design sketches
We invite both regular paper submissions and system demos of up to 8 pages; submissions need not use the full page limit. There is no formal distinction between long and short papers; instead, we expect the paper length to be commensurate with the contributions. Specifically, we allow four types of submissions:
- Standard workshop papers: Submissions describing substantially original research not previously published in other venues.
- Extended abstracts: Submissions describing preliminary but interesting ideas or results not previously published in other venues.
- Cross-submissions: Papers on relevant topics that have previously been accepted for publication in another venue.
- Demonstrations: Demonstrations of all forms. These can be research and academic demos, but also demos of products, interesting and creative projects, and so forth.
Note that submissions clearly irrelevant to the workshop topics will not be reviewed.
All papers except cross-submissions will go through double-blind peer review; cross-submissions will go through single-blind review. If a paper is accepted, at least one author must attend the workshop to present the work.
You will need to register your email with CMT in order to submit. For instructions on how to open an account, see here. When submitting a paper, you will be asked to select a subject area (either NLP or HCI) on the CMT website. This information is used to match suitable reviewers to your paper.