
The Second Workshop on Intelligent and Interactive Writing Assistants
Accepted Papers
An Engineering Perspective on Writing Assistants for Productivity and Creative Code
[pdf]
Sarah D'Angelo, Ambar Murillo
Software developers write code nearly every day, on tasks ranging from the simple and straightforward to the challenging and creative. As we have seen across domains, AI/ML-based assistants are on the rise in the field of computer science. We refer to them as code generation tools or ML-enhanced software development tooling; they are changing the way developers write code. As we think about how to design and measure the impact of intelligent writing assistants, the approaches used in software engineering and the considerations unique to writing code can provide a different and complementary perspective for the workshop. In this paper, we propose a focus on two themes: (1) measuring the impact of writing assistants and (2) how code writing assistants are changing the way engineers write code. In our discussion of these topics, we outline approaches used in software engineering and consider how the disciplines of prose writing and code writing can learn from each other. We aim to contribute to the development of a taxonomy of writing assistants that includes possible methods of measurement and considers factors unique to each domain (e.g., prose or code).
What Can’t Large Language Models Do? The Future of AI-Assisted Academic Writing
[pdf]
Raymond Fok
Large language models have revolutionized the way we interact with the world around us, yet their relative nascency suggests that their transformative potential for society is still underexplored. Applications built on these models have excelled at summarizing articles, engaging in realistic conversations, and writing creative stories. However, open questions remain in how we can design tools that effectively leverage these models to support complex, cognitively demanding, and factual writing processes. In this position paper, we consider emergent paradigms in human-AI collaborative writing and their implications for future academic writing assistants.
Writing Assistants Should Model Social Factors of Language
[pdf]
Vivek Kulkarni, Vipul Raheja
Intelligent writing assistants powered by Large Language Models (LLMs) are more popular today than ever before, but their further widespread adoption is hindered by sub-optimal performance. In this position paper, we argue that a major reason for this sub-optimal performance and adoption is a singular focus on the information content of language while ignoring its social aspects. We analyze the different dimensions of these social factors in the context of writing assistants and propose their incorporation into building smarter, more effective, and truly personalized writing assistants that would enrich the user experience and contribute to increased user adoption.
Interactive writing systems and why small(er) could be more beautiful
[pdf]
Ibukun Olatunji
Machine learning models can support human creativity, including tasks such as writing. This position paper explores and critiques the design of writing systems based on Large Language Models (LLMs) and big data. The paper proposes smaller datasets as a way to think about systems for underrepresented languages, types of writers, and writing tasks. In addressing these topics, the paper also considers how we can make writing assistants that are more accessible and inclusive than current state-of-the-art LLMs.
A Situation Awareness Perspective on Intelligent Writing Assistants: Augmenting Human-AI Interaction with Eye Tracking Technology
[pdf]
Moritz Langner, Peyman Toreini, Alexander Maedche
Intelligent writing assistants support the partial automation of the writing process. Existing research has investigated the interaction between humans and automated systems and has identified the maintenance of situation awareness (SA) as a key challenge for humans. Especially in the context of intelligent writing assistants, humans have to maintain SA as they are held responsible for the written text. We build on existing research on automated systems and human-robot/AI collaboration and their interplay with SA theory. In particular, we propose the augmentation of human interaction with intelligent writing assistants through the use of eye tracking technology. Eye tracking technology enables the non-invasive detection of SA based on eye movements. On this basis, writing assistants can be adapted to users' cognitive states such as SA. We argue that for the successful implementation of intelligent writing assistants in the real world, eye-based analysis of SA and augmentation are key.
Creative Struggle: Arguing for the Value of Difficulty in Supporting Ownership and Self-Expression in Creative Writing
[pdf]
David Zhou, Sarah Sterman
In each step of the creative writing process, from ideation to generation to revision, authors must grapple with their creative goals and personal perspectives [10]. This self-interrogation drives both the author's sense of ownership over the output and their sense of authenticity in self-expression. As ever more capable language models and generators accelerate the development of intelligent writing assistants, it is essential that designers of these tools consider how writing assistants affect perceived ownership and self-expression in the creative writing process. Here, we suggest that the role of writing assistant software should not be to remove all obstacles and frustrations, but to enable the writer to focus their efforts on the creative challenges that are the most personally fulfilling to solve. We believe that considering psychological ownership over the self-interrogation process, or the creative struggle, can be a productive way to center the writer's experience as a key design goal. A focus on psychological ownership can frame which pieces of the creative process may be offloaded without interfering with a writer's own sense of expression. By participating in the In2Writing workshop, we hope to bring the concept of creative struggle into discussions of ownership, taxonomy, and future directions of writing assistant design.
Repurposing Text-Generating AI into a Thought-Provoking Writing Tutor
[pdf]
Tyler Taewook Kim, Quan Tan
Text-generating AI technology has the potential to revolutionize writing education. However, current AI writing-support tools are limited to providing linear feedback to users. In this work, we demonstrate how text-generating AI can be repurposed into a thought-provoking writing tutor with the addition of recursive feedback mechanisms. Concretely, we developed a prototype AI writing-support tool called Scraft that asks users Socratic questions and encourages critical thinking. To explore how Scraft can aid writing education, we conducted a preliminary study with 15 students in a university writing class. Participants reported that Scraft’s recursive feedback was helpful for improving their writing skills, but they also noted that its feedback was sometimes factually incorrect and lacked context. We discuss the implications of our findings and future research directions.
Approach Intelligent Writing Assistants Usability with Seven Stages of Action
[pdf]
Avinash Bhat, Disha Shrivastava, Jin L.C. Guo
Despite the potential of Large Language Models (LLMs) as writing assistants, they are plagued by issues with the coherence and fluency of model output, trustworthiness, ownership of the generated content, and predictability of model performance, thereby limiting their usability. In this position paper, we propose adopting Norman's seven stages of action as a framework for the interaction design of intelligent writing assistants. We illustrate the framework's applicability to writing tasks by providing an example of software tutorial authoring. The paper also discusses the framework as a tool to synthesize research on the interaction design of LLM-based tools and presents examples of tools that support the stages of action. Finally, we briefly outline the potential of such a framework for human-LLM interaction research.
Parachute: Evaluating Interactive Human-LM Co-writing Systems
[pdf]
Hua Shen, Tongshuang Wu
A surge of advances in language models (LMs) has led to significant interest in using LMs to build co-writing systems, in which humans and LMs interactively contribute to a shared writing artifact. However, there is a lack of studies assessing co-writing systems in interactive settings. We propose a human-centered evaluation framework, Parachute, for interactive co-writing systems. Parachute showcases an integrative view of interaction evaluation, where each evaluation aspect consists of categorized practical metrics. Furthermore, we present a use case demonstrating how to evaluate and compare co-writing systems using Parachute.
Practical Challenges for Investigating Abbreviation Strategies
[pdf]
Elisa Kreiss, Subhashini Venugopalan, Shaun Kane, Meredith Ringel Morris
Saying more while typing less is the ideal we strive towards when designing assistive writing technology that can minimize effort. Complementary to efforts on predictive completions is the idea of using a drastically abbreviated version of an intended message, which can then be reconstructed using language models. This paper highlights the challenges that arise when investigating what makes an abbreviation scheme promising for a potential application. We hope that this can provide a guide for designing studies that, in turn, allow for fundamental insights into efficient and goal-driven abbreviation strategies.
What Writing Assistants Can Learn from Programming IDEs
[pdf]
Sergey Titov, Agnia Sergeyuk, Timofey Bryksin
With the development of artificial intelligence, writing assistants (WAs) are changing the way people interact with text, creating lengthy outputs that can be overwhelming for users. The programming field has long addressed this issue, and Integrated Development Environments (IDEs) have been created for efficient software development, helping programmers reduce cognitive load. This experience could be employed in the development of WAs. IDEs can also be used to test assumptions about interventions that help people interact with WAs efficiently. Previous works have successfully used self-written IDE plugins to test hypotheses in the field of human-computer interaction. The lessons learned can be applied to the building of WAs.
Future Writing Assistants for Qualitative Research
[pdf]
Courtni Byun
Qualitative analysis can be an extremely time-intensive process. Various writing assistants have been developed for qualitative analysis (QA), but they all stop short of providing a nuanced analysis of qualitative data. Future QA writing assistants could leverage large language models (LLMs) to help change this. We explore how future writing assistants using these models might benefit QA and qualitative research.
Writing Tools: Looking Back to Look Ahead
[pdf]
Cerstin Mahlow
Research on writing tools started with the increased availability of computers in the 1970s. After a first phase addressing the needs of programmers and data scientists, research in the late 1980s began to focus on writing-specific needs. Several projects aimed to support writers and let them concentrate on the creative aspects of writing by having the writing tool take care of the mundane aspects using NLP techniques. Due to the technical limitations of the time, these projects failed and research in this area stopped. However, today’s computing power and NLP resources make the ideas from these projects technically feasible; in fact, we see projects explicitly continuing from where abandoned projects stopped, and we see new applications integrating NLP resources without reference to those older projects. To design intelligent writing assistants with the possibilities offered by today's technology, we should re-examine the goals and lessons learned from previous projects to define the important dimensions to be considered.
Towards an Authorial Leverage Evaluation Framework for Expressive Benefits of Deep Generative Models in Story Writing
[pdf]
Sherol Chen, Carter Morgan, David Olsen, Ethan Manilow, Mark Nelson, Qiuyi Zhang, Senjuti Dutta, Piotr W Mirowski, Kory Wallace Mathewson
What are the dimensions of human intent, and how do writing tools shape and augment these expressions? From papyrus to auto-complete, a major turning point came when Alan Turing famously asked, “Can Machines Think?” If so, should we offload aspects of our thinking to machines, and what impact do they have in enabling our intentions? This paper adapts the Authorial Leverage framework from the Intelligent Narrative Technologies literature for evaluating recent advances in generative models. As widespread access to Large Language Models (LLMs) increases, our evaluative frameworks must follow suit. To this end, we discuss previous expert studies of deep generative models for fiction writers and playwrights, and propose both author- and audience-focused directions for furthering our understanding of the Authorial Leverage of LLMs, particularly in the domain of comedy writing.
DiaryMate: Exploring the Roles of Large Language Models in Facilitating AI-mediated Journaling
[pdf]
Taewan Kim, Donghoon Shin, Young-Ho Kim, Hwajung Hong
In this position paper, we report our ongoing research examining the use of large language models (LLMs) to promote mental well-being through journaling. While journaling can be beneficial for expressing personal thoughts and emotions, it can be challenging for individuals who struggle to articulate their internal states in words. LLMs have the potential to assist with this by translating users' ambiguous thoughts and experiences into writing. However, using LLMs in journaling can also have drawbacks, such as neglecting the personal context of users and reducing users' initiative in writing. To explore the opportunities and challenges of using LLMs in journaling, we conducted a field deployment study using DiaryMate. The participants used the diverse sentences generated by the LLM to reflect on their past experiences from multiple perspectives and saw the model as an empathetic partner. However, they also lent excessive credibility to the LLM's generated sentences, often prioritizing its emotional expressions over their own. Based on these findings, we highlight the importance of weighing the risks and benefits of using such technology to support personal reflection and emotional expression.
Do AI Writing Assistants Improve Productivity?
[pdf]
Robert E Cummings
In the last 12 months, AI-powered writing assistants and especially AI-powered writing generators have attracted attention worldwide. To best understand the potential of writing generators, it will be helpful to first understand the impact of writing assistants. As writing assistants and generators continue to proliferate, the research community should develop clearer definitions and frameworks for both categories of AI-powered tools. By creating a taxonomy of the rapidly emerging writing generators, the second In2Writing Workshop has the opportunity to influence the reception of writing generators currently under development.
Writing with Generative AI: Multi-modal and Multi-dimensional Tools for Journalists
[pdf]
Sitong Wang, Lydia Chilton, Jeffrey Nickerson
New generative AI models expand the design space for writing assistants. These systems, together with humans, form a larger creative system. The authors are building tools that can potentially help journalists write, as part of an NSF grant on the Future of News Work. At the CHI conference, a paper coming out of the grant will be presented; it describes a system for generating news angles and its evaluation by journalists. Building on this work, we are exploring ways of augmenting writing assistants for use in journalism. Two complementary directions are discussed here. One is building generative writing assistants in conjunction with image generation in order to generate storyboards. The second is building a network graph-based interface so that ideation can be explored in a semi-structured yet non-linear way.
Can AI Support Fiction Writers Without Writing For Them?
[pdf]
Jessi Stark, Anthony Tang, Young-Ho Kim, Joonsuk Park, Daniel Wigdor
The HCI community has intensively explored the employment of AIs in story generation. However, creative writers may have mixed perceptions about their ownership of the story when there are significant AI contributions. We explore opportunities for AIs to support fiction writers without compromising their feeling of story ownership. In this paper, we present preliminary results of a formative interview study with fiction writers (N=9), focusing on their practice and the challenges of the story-writing process. We discuss some of the challenges these writers face and propose design opportunities to address these challenges in ways other than text generation.
Dimensions for Designing LLM-based Writing Support
[pdf]
Nur Yildirim, Frederic Gmeiner
In our experience, there are three key considerations when designing LLM experiences for writing support: LLM capabilities, task complexity, and output quality. In this position paper, we argue that a taxonomy of writing assistants capturing these dimensions could scaffold the process of designing experiences that writers find valuable. The remainder of this paper details each dimension and how these could inform the exploration of the design space of LLM-based writing support.
Beyond Summarization: Designing AI Support for Real-World Expository Writing Tasks
[pdf]
Zejiang Shen, Tal August, Pao Siangliulue, Kyle Lo, Jonathan Bragg, Jeff Hammerbacher, Doug Downey, Joseph Chee Chang, David Sontag
Large language models have introduced exciting new opportunities and challenges in designing and developing new AI-assisted writing support tools. Recent work has shown that leveraging this new technology can transform writing in many scenarios, such as ideation during creative writing, editing support, and summarization. However, AI-supported expository writing, including real-world tasks like scholars writing literature reviews or doctors writing progress notes, is relatively understudied. In this position paper, we argue that developing AI support for expository writing poses unique and exciting research challenges and can lead to high real-world impact. We characterize expository writing as evidence-based and knowledge-generating: it contains summaries of external documents as well as new information or knowledge. It can be seen as the product of authors' sensemaking over a set of source documents, and the interplay between reading, reflection, and writing opens up new opportunities for designing AI support. We sketch three components for AI support design and discuss considerations for future research.
The Model is the Message
[pdf]
Isabelle Levent, Lila Shroff
In this paper, we examine Large Language Models (LLMs) as a new kind of medium—in McLuhan’s sense of the word—and raise concerns about corporate control over language due to the centralization of LLM development. Citing examples of state, social, and commercial power over language, we explore how certain groups have historically determined the direction of linguistic evolution, and the consequences of this power. Finally, we consider language homogenization—a subset of algorithmic monoculture in which a majority of text online is generated by models owned by a small group of profit-incentivized companies—as one particular aspect of the medium that shapes readers' and writers' experiences of text production and consumption.
Using Large Generative Models for Storyboarding: Challenges and Goals
[pdf]
Zheng Ning, Dingzeyu Li, Toby Jia-Jun Li
Storyboard creation is a valuable but tedious part of producing video content. With recent advances in large generative models (LGMs), however, we see great potential in using human-AI collaboration to facilitate this process. In this position paper, we discuss the unique characteristics of storyboarding and highlight the challenges and goals of using LGMs in this domain.
Decoding the End-to-end Writing Trajectory in Scholarly Manuscripts
[pdf]
Ryan Hyunkyo Koo, Anna Martin, Linghe Wang, Dongyeop Kang
Scholarly writing presents a complex space that generally follows a methodical procedure to plan and produce both rationally sound and creative compositions. Recent works involving large language models (LLMs) demonstrate considerable success in text generation and revision tasks; however, LLMs still struggle to provide the document-level structural and creative feedback that is crucial to academic writing. In this paper, we introduce a novel taxonomy that categorizes scholarly writing behaviors according to intention, writer actions, and the information types of the written data. We also provide ManuScript, an original dataset annotated with a simplified version of our taxonomy to show writer actions and the intentions behind them. Motivated by cognitive writing theory, our taxonomy for scientific papers includes three levels of categorization in order to trace the general writing flow and identify the distinct writer activities embedded within each higher-level process. ManuScript intends to provide a complete picture of the scholarly writing process by capturing the linearity and non-linearity of writing trajectories, such that writing assistants can provide stronger feedback and suggestions at an end-to-end level. The collected writing trajectories can be viewed at https://minnesotanlp.github.io/REWARD_demo/
Augmenting Human-AI Co-Writing with Interactive Visualization
[pdf]
Md Naimul Hoque, Niklas Elmqvist
Writing is a fundamental human activity—but today we have the opportunity to leverage Natural Language Processing (NLP) methods to help in this endeavor. Recent tools go beyond mere grammatical error-checking and use Large Language Models (LLMs) to support human-AI co-writing. While existing tools are helpful, many challenges remain: (1) mitigating ownership tensions between humans and AI; (2) enabling human autonomy in the process; (3) creating mechanisms for writers to understand and explore the reasoning of AI; and (4) applying NLP to complex and abstract narrative components (e.g., characterization, events, dialogue). In this paper, we hypothesize that some of these challenges can be resolved by introducing a communication interface between writers and AI. Further, we propose Interactive Visualization, a prominent method for making sense of complex AI reasoning and revealing hidden patterns in text data, as that interface. To demonstrate our proposal, we present two case studies where we combine NLP and interactive visualization to support creative writing. The first case study is on mitigating social biases in fiction writing; the second is on the design of dynamic characters and scenes. We conclude by outlining our future work and broader impact.
Towards Explainable AI Writing Assistants for Non-native English Speakers
[pdf]
Yewon Kim, Mina Lee, Donghwi Kim, Sung-Ju Lee
We highlight the challenges faced by non-native speakers when using AI writing assistants to paraphrase text. Through an interview study with 15 non-native English speakers (NNESs) with varying levels of English proficiency, we observe that they face difficulties in assessing paraphrased texts generated by AI writing assistants, largely due to the lack of explanations accompanying the suggested paraphrases. Furthermore, we examine their strategies to assess AI-generated texts in the absence of such explanations. Drawing on the needs of NNESs identified in our interview, we propose four potential user interfaces to enhance the writing experience of NNESs using AI writing assistants. The proposed designs focus on incorporating explanations to better support NNESs in understanding and evaluating the AI-generated paraphrasing suggestions.
Using writing assistants to accelerate the peer review process
[pdf]
Shiping Chen, Duncan Brumby, Anna Cox
With the rapidly increasing number of submissions, challenges emerge in the peer review process. It is therefore necessary to support reviewers so that they can complete review tasks efficiently. By participating in this workshop, we hope to discuss and exchange ideas on how to better design writing assistants to meet the needs of reviewers and how to integrate them with existing review systems and review tools. Our vision for the future is to develop writing assistant tools that can help reviewers produce high-quality reviews in less time and with reduced workload.
Pastiches, Distributions, and Appropriations in Writing Assistants: A Squib
[pdf]
Jaylen Pittman
A discussion of pastiches and their implications for minoritized language varieties.