Scientists are exploring how large language models (LLMs) can assist with the complex task of controlling dynamic planetarium visualisations during live science centre shows. Mathis Brossier, Mujtaba Fadhil Jawad, and Emma Broman, all from Linköping University, Sweden, alongside colleagues including Julia Hallsten from Visualisering Center C and Alexander Bock, investigated the feasibility of using LLMs as ‘pilots’ within the OpenSpace software, a role traditionally fulfilled by a human co-presenter. Their research, involving seven professional planetarium guides, demonstrates that while LLMs currently lack the nuanced skills of experienced human pilots, they can function effectively as ‘co-pilots’, reducing workload and enabling multitasking. This work represents a significant step towards automating aspects of live visualisation, potentially enhancing the immersive experience for audiences, streamlining show delivery, and opening new avenues for public engagement with science.

AI piloting enhances planetarium show delivery

This research addresses a critical role typically performed by a human: piloting the visualization in close collaboration with the on-stage guide. The AI-pilot functions as a conversational agent, actively listening to the guide’s spoken instructions and interpreting them as commands to manipulate camera angles, adjust simulation time, or activate visual elements within the planetarium environment. The researchers implemented a system in which the AI agent could operate in either a reactive mode, responding only to direct commands, or a proactive mode, in which it anticipates needs and intervenes on implicit queries while the guide is speaking. A noteworthy aspect is the focus on interaction autonomy, moving beyond the limitation of standard LLM assistants, which require explicit prompting for every action.
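To make the reactive/proactive distinction concrete, the sketch below outlines how such an agent loop might look. It is a minimal illustration only: the names used here (ask_llm, OpenSpaceBridge, pilot_loop) are hypothetical stand-ins, not the authors’ implementation and not OpenSpace’s actual scripting API.

```python
# Minimal sketch of a reactive vs. proactive AI-pilot loop.
# All names (PilotAction, OpenSpaceBridge, ask_llm) are hypothetical
# placeholders for illustration, not the paper's system or the OpenSpace API.

from dataclasses import dataclass
from typing import Iterator, Optional


@dataclass
class PilotAction:
    """A single visualization command, e.g. a camera move or a time change."""
    kind: str      # "camera", "time", or "toggle"
    target: str    # e.g. "Mars", "2030-01-01", "Orbits"


class OpenSpaceBridge:
    """Placeholder for whatever layer actually forwards commands to OpenSpace."""
    def execute(self, action: PilotAction) -> None:
        print(f"[OpenSpace] {action.kind} -> {action.target}")


def ask_llm(utterance: str, proactive: bool) -> Optional[PilotAction]:
    """Stand-in for an LLM call that maps speech to an action (or to nothing).

    In reactive mode it acts only on explicit commands ("fly to Mars");
    in proactive mode it may also act on implicit cues in the guide's speech
    ("...and this is what Mars looks like up close").
    """
    text = utterance.lower()
    if "fly to mars" in text:
        return PilotAction("camera", "Mars")
    if proactive and "mars" in text:
        return PilotAction("camera", "Mars")
    return None


def pilot_loop(transcript: Iterator[str], proactive: bool) -> None:
    """Consume a stream of transcribed guide speech and execute any actions."""
    bridge = OpenSpaceBridge()
    for utterance in transcript:
        action = ask_llm(utterance, proactive)
        if action is not None:
            bridge.execute(action)


if __name__ == "__main__":
    guide_speech = [
        "Welcome everyone to tonight's show.",
        "Let's leave Earth behind... and this is Mars up close.",
        "Please fly to Mars.",
    ]
    print("-- reactive --")
    pilot_loop(iter(guide_speech), proactive=False)
    print("-- proactive --")
    pilot_loop(iter(guide_speech), proactive=True)
```

Run as written, the reactive loop acts only on the explicit “fly to Mars” command, while the proactive loop also intervenes on the earlier, implicit mention of Mars, mirroring the two conditions compared in the study.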
This breakthrough establishes a foundation for more nuanced and collaborative interactions between humans and AI in live, immersive environments. By comparing reactive and proactive AI modes, the scientists provide valuable insights into the design of truly collaborative AI systems, paving the way for future research into visualization piloting and the development of more sophisticated AI co-pilots capable of integrating seamlessly into live show settings. The team successfully implemented an LLM-based interaction system within OpenSpace, a widely used astrophysics visualization tool for live shows involving a presenter and a visualization pilot.

To explore the nuances of proactive AI, the study compared a reactive mode, where the AI responds solely to direct queries, with a proactive mode, where the agent intervenes based on its own assessment of the situation, even without explicit prompting from the guide. Notably, the experts, drawing on their extensive experience with human piloting, were able to offer comparative observations across the two AI conditions. The results confirm the successful integration of conversational AI into OpenSpace, allowing dynamic control of camera motion, simulation time, and visual elements. The work provides a foundation for further research into interaction autonomy, addressing the current limitation that LLMs require a user prompt before generating a response, and it proposes a path towards AI systems capable of proactive intervention and autonomous reasoning within live, educational experiences.
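One plausible way to give an LLM this kind of control over camera motion, simulation time, and visual elements is to expose them as callable tools in a function-calling schema, as sketched below. The tool names and parameters are assumptions made for illustration; they are not taken from the paper and do not correspond to OpenSpace’s actual scripting API.

```python
# Illustrative tool definitions a conversational agent might expose to an LLM.
# The names (fly_to, set_simulation_time, toggle_layer) and parameters are
# assumptions for illustration only, not the paper's schema or OpenSpace's API.

PILOT_TOOLS = [
    {
        "name": "fly_to",
        "description": "Move the camera to focus on a named scene object.",
        "parameters": {
            "type": "object",
            "properties": {
                "target": {"type": "string", "description": "Scene object, e.g. 'Mars'"},
                "duration_s": {"type": "number", "description": "Travel time in seconds"},
            },
            "required": ["target"],
        },
    },
    {
        "name": "set_simulation_time",
        "description": "Jump the simulation clock to an ISO-8601 timestamp.",
        "parameters": {
            "type": "object",
            "properties": {"time": {"type": "string"}},
            "required": ["time"],
        },
    },
    {
        "name": "toggle_layer",
        "description": "Show or hide a visual element such as orbit lines.",
        "parameters": {
            "type": "object",
            "properties": {
                "layer": {"type": "string"},
                "enabled": {"type": "boolean"},
            },
            "required": ["layer", "enabled"],
        },
    },
]
```

In such a setup, the model’s only job is to choose a tool and fill in its arguments from the guide’s speech, while a thin bridge layer validates the call and forwards it to the visualization software.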

LLMs as dynamic co-pilots for OpenSpace presentations

Scientists have investigated the potential of large language models (LLMs) to function as pilots within planetarium visualization software, specifically OpenSpace, during live science centre presentations. Results indicate that while LLMs currently lack the nuanced skills required to fully replace human pilots, they demonstrate promise as ‘co-pilots’ capable of reducing workload and enabling multitasking for experienced presenters. This work establishes that LLMs can interpret verbal cues to manage camera movements, adjust simulation time, and modify visual elements within the planetarium environment, though not without limitations. The authors acknowledge that current LLMs require significant pre-programming for specific shows and struggle with truly novel situations, necessitating substantial preparation and software knowledge.
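The show-specific pre-programming the authors describe could, for example, take the form of a per-show cue list that constrains what the agent is allowed to do, along the lines of the hypothetical configuration below. The structure, names, and cue phrases are illustrative assumptions, not material from the paper.

```python
# Hypothetical per-show "pre-programming": a whitelist of targets and cue
# phrases the agent may act on, reflecting the paper's point that the AI-pilot
# needs show-specific preparation. Names and values are illustrative only.

SOLAR_SYSTEM_SHOW = {
    "allowed_targets": ["Earth", "Moon", "Mars", "Jupiter", "Sun"],
    "cues": {
        "let's visit the red planet": {
            "tool": "fly_to",
            "args": {"target": "Mars"},
        },
        "fast forward one year": {
            "tool": "set_simulation_time",
            "args": {"time": "2026-01-01T00:00:00"},
        },
        "show the orbits": {
            "tool": "toggle_layer",
            "args": {"layer": "Orbits", "enabled": True},
        },
    },
}
```

A configuration like this would let a presenter rehearse and test the agent’s behaviour offline before a live show, which is where the authors see the clearest near-term value.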

However, the potential for LLMs to assist in show preparation, testing, and brainstorming in a low-risk setting is noteworthy. Future research should focus on developing more autonomous LLMs capable of discrete interaction with visualizations, facilitating seamless integration into live events and easing the onboarding process for new pilots. The study concedes that LLMs are unlikely to ever fully replicate the adaptability and expertise of human guides. Nevertheless, the findings suggest a valuable role for these systems as supportive tools, enhancing the efficiency and capabilities of live planetarium presentations. Further development towards greater autonomy could unlock new possibilities for interactive and engaging science communication experiences.