Schedule

This may be updated before the workshop.

Location

Please check the conference website for the latest information: muc2025.mensch-und-computer.de

09:00 CEST – Welcome

The first 30 minutes are dedicated to getting to know each other and discussing expectations for the workshop.

  • The hosts will introduce themselves, their work, and their focus topics.
  • All participants will have the opportunity to introduce themselves and their focus of work.
  • We will collect what participants expect from the workshop.

09:30 CEST – Inspiration

We will start the workshop with an inspiring session that sets up the remainder of the workshop.

Cansu Demir, Alexander Meschtscherjakov, Magdalena Gärtner – How In-Vehicle Intelligent Agents Redefine SAE Level 5 Autonomy

As we move towards SAE Level 5 automation, vehicles no longer need drivers - but passengers have to feel safe, informed, and in control. In this talk, I explain how voice-based in-vehicle intelligent agents (IVIAs) can serve as social and informational bridges between people and autonomous systems. In a mixed-methods study using the voice-only agent Iris within a high-fidelity driving simulator, we examined how different types of agent-provided information - critical, relevant, and personalized - affect trust, acceptance, and engagement. We found that drivers are best served by direct and timely reports on vehicle status and safety incidents. Personalized content increases comfort but raises questions of transparency and privacy - a trade-off between emotional resonance and ethical deliberation that defines the double challenge of designing emotionally engaging and ethically responsible agents. Notably, respondents indicated sufficient trust to perform non-driving tasks, but desired more flexible, multimodal feedback to contextualize the agent’s decisions. This study challenges us to rethink how we form connections with autonomous vehicles - not as machines, but as talkative companions. By integrating emotional intelligence and situational awareness into IVIA design, we can establish not only usability, but trust and peace of mind in tomorrow’s transportation experience.

Mathias Haimerl – Is automation making traffic more exclusive?

Traffic has never been easy for people with special needs. Blind people have to rely on acoustic cues in a very noisy environment. Children and wheelchair users are easily hidden behind parked cars. With rising automation levels, the number of AI systems used in traffic components will grow. While AI approaches are being used to reduce barriers, they also hold the potential to introduce new ones: limited training data and the inherent operating mode of machine learning often render people with disabilities outliers. Trajectories of people in wheelchairs who push themselves backwards with their feet are not correctly detected, or are detected as trash bins. Interaction cues are too complicated for cognitively impaired people to understand. Social aspects of automation may also influence inclusivity: today, people with cognitive impairments in rural areas, as well as dialysis patients, use taxis daily. With higher volumes of AVs, taxis may vanish in the long term. A missing prosocial counterpart to the client may drastically affect the accessibility of AVs. We would like to explore the pro- and antisocial aspects of vehicle automation and investigate their impact on accessibility, to prevent future traffic from becoming increasingly exclusive.

10:30 CEST – Coffee Break

11:00 CEST – Exploring the potential of AI

Alice Rollwagen – AI-Driven Interfaces to Enhance Accessibility and Safety for VRUs

Recent HCI research suggests that dynamic adaptivity can markedly broaden accessibility by accommodating diverse and fluctuating user abilities and situational contexts. Artificial intelligence enables systems to adapt flexibly to users’ needs. Nevertheless, because its deployment may elicit discomfort or skepticism among users, the discussion about “AI‑Driven Interfaces” will examine how adaptive user interfaces and human‑centered AI can jointly unlock new levels of accessibility while ensuring responsible adoption. For Vulnerable Road Users in particular, AI-driven adaptivity offers a remarkable opportunity to overcome persistent accessibility barriers. We will probe three questions: (1) Which mobility scenarios could be made more accessible for which VRU groups by AI-driven interfaces? (2) Through which types of information provided by AI-supported interfaces? (3) Which concerns may arise when AI-supported interfaces deliver information to enhance accessibility in mobility scenarios? The session will capture expert perspectives, surface open research questions, and map critical gaps to shape the next wave of AI‑driven interfaces for VRUs.

Muhammad Umair Meo – Inclusive External Human–Machine Interfaces (eHMIs) for Automated Vehicles: A Human-Centered Perspective

The continuous development and deployment of autonomous vehicles (AVs) are changing the way people get around in cities. As autonomous cars share the road with more and more humans, a big problem arises: How can AVs communicate their intent to pedestrians, cyclists, and other road users, particularly those with disabilities affecting vision, hearing, or mobility? Traditional vehicles rely heavily on nonverbal signals such as eye contact, hand gestures, and driver body language to signal intent to pedestrians. In the absence of a human driver, AVs must assume this communication responsibility through external human-machine interfaces (eHMIs). However, most existing eHMI designs lean heavily on visual or auditory output, which does not adequately serve people with sensory or cognitive limitations. This paper explores the transformative role of generative artificial intelligence (AI)—such as large language models and AI-driven media generators—and agentic AI, which is capable of making autonomous decisions in real time based on user and environmental context. These technologies offer promising opportunities to create inclusive, adaptive, and intelligent eHMIs that go beyond static signals. Instead of sending generic messages, these systems can change how they convey them in real time based on who is nearby, how they might interpret signals, and the present circumstances. Our studies explore the current state of advances in eHMI technology and identify critical areas where improvements in accessibility, flexibility, and practicality for real-world applications are necessary. We propose an innovative AI-driven framework for developing external human-machine interfaces (eHMIs) that can use multiple output modalities, including visual, auditory, and tactile, possess environmental awareness, and adapt based on user interaction over time. Additionally, we ensure that our approach aligns with the goals of the Accessible Automated Automotive Workshop Series (A3WS), an international effort to make transportation easier for everyone to use. Our research aims to facilitate collaborative projects geared toward enhancing the safety, accessibility, and fairness of autonomous vehicle systems for all users, especially those who have been historically marginalized in technological advancement.

12:30 CEST – Lunch Break

14:00 CEST – Interactive

We want to establish a common understanding of the key challenges in current AV-VRU research. We will review open questions from the submitted workshop papers to derive topics for a break-out session. Participants will be split into multiple groups, each discussing one topic. Possible subjects include:

  • What are the main blockers hindering researchers from working collaboratively with PSN?
  • How can we better include PSN in development and design?
  • What inclusive designs in the automotive context can leverage today’s advanced technology to support PSN?

After an energizer, we will prioritize the identified challenges and discuss paths towards solutions. The relevance of each research opportunity/challenge will be discussed in a plenum.

Mark Colley – Walking in their shoes: Using simulators to improve immersion for disabilities

To investigate traffic automation for people with visual impairments, Mark’s team developed VIP-Sim, a Vision Impairment Simulator for accessible design. For a new study, a further simulator was developed. We invite all participants to immerse themselves in the worlds of people with impairments by walking a few steps in their shoes.

15:00 CEST – Wrap-Up and Next Steps

The goal of this workshop is to reach a common understanding of research gaps and opportunities, as well as requirements for inclusive AV-VRU interaction. The last part is dedicated to wrapping up the workshop’s outcomes, planning next steps, and discussing where to publish possible results. We want to formulate a research agenda that supports initiatives in research and practice to strengthen the accessibility of today’s and future automotive designs.

After the workshop

The discussion on next steps can, of course, continue after the workshop, e.g. in our Slack group. Depending on attendees’ preferences, we may go out for a shared dinner.