
Experience, not Technology


Designers are the right innovators for defining hybrid AI experiences, blending proactive and reactive intelligence


There are two experiential forms of AI. Reactive AI relies on people knowing how to interact with AI. Proactive AI has been common historically, suggesting results and adapting the interface without telling you it’s AI. The potential of combining both remains under-explored, and Designers are the right innovators. Here’s why, and a first step towards how.

AI has generally operated in the background. This follows the best practice that people care about outcomes; they shouldn’t have to worry about technology mechanisms. Lately, though, AI has increasingly been cast into a central role in product experiences, setting expectations upfront and telling you AI is here.

AI and UX have grown from different disciplines. AI concerns itself with data and algorithms; UX concerns itself with usability and aesthetics. The overlap becomes increasingly apparent when AI is called to the user’s attention in interfaces. This is why AI innovation is UX.

What’s Different About Today’s AI?

Human-Centered Machine Learning brings focus to users, but ML has already been chosen as the enabling technology. In these cases, a common trap is AI-led problem framing, similar to a hammer searching for nails.

Understanding technology allows for innovation in different layers of complexity, from shaping the underlying model to applications people use. But towards discovering valuable applications for AI, consider starting from experience, not technology.

This is a case for re-focusing on pleasurable experiences and desirable outcomes, not technology itself. We create experiences for users and technology is leveraged in service of that goal… the point of design is solving the problems that linear, mechanical thinking can’t.


It helps to approach design from a level of abstraction beyond methods and mechanisms.

A promising approach involves hybrid AI experiences, blending proactive and reactive forms of intelligence

Hybrid experiences rely less on people understanding how to interact with AI mechanisms; they help people make choices and feel natural within the user’s flow of actions.

Generally, products have quietly integrated AI without requiring user interaction. Emergent AI, particularly content generation and language understanding, mostly positions AI as a central product experience, encouraging people to actively participate and make choices. The distinction is subtle, but crucial when defining experience.

To explore that combination, I categorize AI into two overarching experiential types and provide a rare collection of AI examples demonstrating each.

Reactive AI

Reactive AI is invoked through active engagement. It responds to inputs immediately, without anticipating future events. For example, a chatbot responds to questions with appropriate answers. Results are provided when people ask or take action.

In these experiences, people generally know their choices & explicitly make them. This means people may come with different expectations, moderated through further interaction with the AI system.

Other examples include voice-enabled virtual assistants, predictive text inputs, virtual game opponents, shortcut suggestions based on app navigation, visual content generation, and media & content editors.

Reactive AI Examples

Proactive AI

Proactive AI works its way quietly into workflows. It suggests results and adapts the interface without telling you it’s AI. Optimizations happen automatically as systems learn, appearing only during key moments.

These experiences are not expected, but people build familiarity after the first encounters. Interface elements appear in the right place and time. People don’t need to actively make choices & may even be unaware these features exist.

Examples include recommendation systems curating feeds, search engines that predict what you’re looking for, typing suggestions for your next word or phrase, the keyboard optimizing tap targets based on your typing patterns, data summaries and insights, camera illumination mapping synthetic lighting to facial features, predictive maintenance systems that anticipate equipment end-of-life, health monitoring wearables that track your habits and alert you about anomalies…

Proactive AI Examples

Two Experiential Forms of Intelligence

Proactive and reactive AI experiences have mostly been developed independently. Yet these concepts deeply influence how people and systems interact, and Designers are the right innovators for the task.

The potential of combining proactive and reactive AI experiences remains largely under-explored

Designers are uniquely positioned to recognize situations for hybrid AI experiences because we view the world through the eyes of the user. We anticipate situations that occur frequently, recognize handoff points between people and AI, and prioritize the appropriateness of technology to be more or less prominent.

Research shows experienced product teams struggle to identify diverse and compelling AI examples, even when many exist…

That’s because the longest-running examples of AI are proactive and dynamic. They’re embedded into workflows, designed to be seamless, and harder to notice. Consider HCI paradigms like adaptive user interfaces and ambient intelligence. This is a space where AI feels like contextual support.

Recent examples in content generation and language understanding mostly emphasize reactive AI, requiring active engagement with AI. Consider HCI paradigms like voice user interfaces and human-robot interaction. This is a space where AI can feel like an agent or a relationship.

Combined experiences may offer incredible harmony in user flow, anticipatory actions, and reactions you need in the right place and time. These opportunities lie in experience details. They are discovered by re-framing how AI, as an enabling technology, can change methods, influence outcomes, and simplify interaction.

This is a first step towards turning Designers into effective innovators of hybrid AI experiences.

Recognize AI by experience type. This framing can support clearer mental models about everyday intelligences. Begin with the examples below, which provide a rare sample of diverse AI capabilities, grounded in mechanisms. The taxonomy is inspired by Nur Yildirim’s paper Creating Design Resources to Scaffold the Ideation of AI Concepts.

Together, they offer a starting point for identifying complementary opportunities to turn AI mechanisms into valuable product experiences.
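To make this framing tangible, here’s a minimal Python sketch of how a team might catalog candidate features by experience type and scan for complementary pairings. The feature names and fields are invented for illustration, not drawn from any particular product.

from dataclasses import dataclass
from enum import Enum, auto

class ExperienceType(Enum):
    PROACTIVE = auto()  # system-initiated: anticipates, suggests, adapts
    REACTIVE = auto()   # user-initiated: responds to explicit requests

@dataclass
class AIFeature:
    name: str
    experience: ExperienceType
    trigger: str  # what surfaces the intelligence to the person

# Hypothetical catalog entries, purely for illustration.
catalog = [
    AIFeature("Highlight reel generation", ExperienceType.PROACTIVE, "photo library scan"),
    AIFeature("Background removal", ExperienceType.REACTIVE, "explicit edit request"),
]

def by_experience(features, experience):
    # Group features so complementary (hybrid) pairings are easier to spot.
    return [f for f in features if f.experience is experience]

print([f.name for f in by_experience(catalog, ExperienceType.PROACTIVE)])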

Generative AI is just scratching the surface regarding its potential to craft AI-enhanced features… creating chatbots significantly underestimates its capacity to revolutionize product features across various domains.

Designing genAI-enhanced features

1. Media Creation & Editing

Proactive AI can automate repetitive tasks, suggest edits and adapt to user preferences, while reactive AI can generate content and produce revisions

Proactive AI: To identify media, systems can recognize content in images, estimate similarity between content, and find and label key moments in photo albums and videos. Systems then apply visual enhancements like cropping, lighting, and color, and bring people into focus while blurring the background. They can stabilize shaky footage. Systems can generate highlight reels, montage sequences, or templates to start creative projects, then optimize compositions to fit the aspect ratios of different platforms.

Reactive AI: Systems can process user inputs such as adding filters, removing backgrounds, expanding photos and videos, enhancing quality, and adding or removing objects. They can generate virtual effects on people’s faces and bodies. Then, people can fine-tune edits according to their preferences or invoke AI again for further edits.
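As a rough sketch of how these two modes can hand off to each other in an editing flow, consider the Python outline below. The function names and photo signals are hypothetical stand-ins, not any product’s actual pipeline.

# Hypothetical hybrid editing flow; function names and signals are invented.

def propose_enhancements(photo):
    # Proactive: runs quietly after import, without a user request.
    suggestions = []
    if photo.get("faces"):
        suggestions.append("blur_background")
    if photo.get("underexposed"):
        suggestions.append("auto_relight")
    return suggestions

def apply_edit(photo, edit, strength=1.0):
    # Reactive: runs only when the person accepts or asks for an edit.
    photo.setdefault("applied_edits", []).append({"edit": edit, "strength": strength})
    return photo

photo = {"faces": True, "underexposed": True}
for suggestion in propose_enhancements(photo):           # surfaced as dismissible suggestions
    photo = apply_edit(photo, suggestion, strength=0.8)  # person tunes strength afterwards
print(photo["applied_edits"])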

Case Studies: Portrait Light, Google Photos


Machines can “see” and make sense of things, converting visual data into actionable knowledge.

Artificial Intelligence and Machine Learning: AI in Robotics

2. Autonomous Navigation

Proactive AI anticipates and plans routes, while reactive AI responds to dynamic changes in the environment to ensure safe navigation

Proactive AI systems can suggest optimal routes & estimate time to get to your destination. During navigation, they anticipate upcoming changes in road conditions, detect obstacles on the road, predict traffic patterns & weather conditions. They may alert people about changes and compare alternative routes. When accidents occur, AI systems can detect incidents and immediately call for help.

Reactive AI systems can react to obstacles and potential collisions, applying emergency braking, steering, and re-routing to safety. People can invoke assistive actions like parallel parking, lane switching, and autonomous speed control. Systems assist in maneuvering by detecting people and objects, estimating the size of parking spaces, and adjusting the vehicle’s trajectory.

Related: Tesla AI & Robotics, Skydio Autonomy


3. Writing Assistance

Proactive AI anticipates needs and takes initiative without explicit input, while reactive AI provides real-time feedback and corrections

Proactive AI: Systems can predict the next phrase in a sentence, recommend related keywords, and prompt users with contextual explanations and tutorials. To optimize results, systems can generate publishing schedules based on audience demographics. To personalize, systems can learn user speech patterns and vocabulary to correct transcriptions, detect changes in writing patterns, and recognize how people type over time to optimize the tap area for each key and reduce mistakes.
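To make the tap-target idea more concrete, here’s a minimal sketch of one possible approach: record where a person actually taps relative to each key and nudge that key’s hit-test center toward it. The data structure and blend factor are assumptions, not a description of any shipping keyboard.

from collections import defaultdict

# Offsets of observed taps from each key's center; a stand-in for typing history.
tap_history = defaultdict(list)

def record_tap(key, dx, dy):
    tap_history[key].append((dx, dy))

def effective_center(key, base_center):
    # Shift the key's hit-test center toward where this person actually taps.
    taps = tap_history[key]
    if not taps:
        return base_center
    mean_dx = sum(dx for dx, _ in taps) / len(taps)
    mean_dy = sum(dy for _, dy in taps) / len(taps)
    bx, by = base_center
    return (bx + 0.5 * mean_dx, by + 0.5 * mean_dy)  # 0.5 = conservative blend factor

record_tap("e", 1.2, -0.4)
record_tap("e", 0.8, -0.2)
print(effective_center("e", (10.0, 5.0)))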

Reactive AI: To unblock writers, systems can generate code, templates, and content to get started. To assist, systems can apply auto-corrections to spelling and grammar, discover topics in documents, provide translations, identify context and the writer’s sentiment, and detect changes in writing patterns or plagiarism. When prompted, systems can respond to user requests such as finding relevant information or summarizing documents.

Related: Evaluating Microsoft Copilot on 10 Usability Heuristics, Read Along, Tune


4. Health & Activity Monitoring

Proactive AI: Systems can suggest lifestyle plans, forecast potential health issues and medicine schedules, identify cancer types in medical imaging, and recommend prevention measures and mitigation strategies for chronic conditions. Over time, they can provide insights into health status, recommend personalized interventions and treatments, and discover a person’s routine for energy optimization.

Reactive AI: Systems can detect physical activities as they occur, monitor sleep quality, and identify abnormal vital signs. They can offer guidance for coping with stress and anxiety, coaching for rehabilitation, plans for diet and nutrition, hydration reminders, and emergency response coordination for sudden health emergencies.

Case Studies: Blood Oxygen Saturation Tracking, Detect AI Skin Conditions, BenchSci, Apple Healthcare, Fall Detection


5. Audio & Voice Recognition

Proactive AI offers personalizations or automations, while reactive AI responds to user commands and executes tasks

Proactive AI: Systems can offer playlist suggestions based on past habits, mood, and time of day. Search assistants can predict queries and offer answers before people finish speaking. Systems can estimate the direction a sound came from in a smart speaker, automatically annotate a recording, and predict anomalies in industrial equipment or environments.

Reactive AI: Systems can transcribe speech to text, execute a person’s voice commands, detect voices in audio clips, and adjust smart home devices such as lighting and temperature when a person arrives home. Systems can support real-time dictation, react to spoken input and dynamically translate speech, or generate missing synthetic data based on a sample of your voice.

Related: Personal Voice on iPhone (Video), Boosting Conversation Rates, AI Music Generators

Design Considerations

Proactive AI anticipates and predicts. Reactive AI responds in the moment.

Because proactive AI provides unsolicited results, people may be less tolerant of low-quality information

When offered undesirable results, people may trust the AI system less. For example, a maps app recognizes that you typically go to the gym on weekdays during lunch. But when driving with someone else, you may prefer personal schedules to stay hidden. Anticipating this, proactive AI can behave differently in social settings, turning off suggestions or adapting the experience with privacy checkpoints.
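One way to sketch such a privacy checkpoint is a simple gate on a few context signals, as below. The signals and the decision rule are illustrative assumptions only, not a real product’s sensing model.

# Illustrative "privacy checkpoint": hold personal suggestions in shared contexts.

def should_show_personal_suggestion(context):
    shared = context.get("screen_mirrored", False) or context.get("passengers", 0) > 0
    sensitive = context.get("suggestion_is_personal", True)
    return not (shared and sensitive)

context = {"passengers": 1, "suggestion_is_personal": True}
if should_show_personal_suggestion(context):
    print("Suggest: Navigate to the gym?")
else:
    print("Hold personal suggestions; offer a neutral search prompt instead.")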

Personalized well, proactive AI may offer incredible convenience. Imagine opening your wallet app to pay at Starbucks and being instantly redirected to the Starbucks app with your remaining gift card balance. Now imagine driving to a business meeting in a new city, where your navigation system not only guides you to the location but also notifies the organizer of your ETA.

Because reactive AI provides results requested explicitly, people come with initial expectations, further moderated through ongoing interaction with the AI system

For objective tasks such as autonomous parking, people anticipate flawless performance. For convenience tasks like voicemail transcriptions, minor inaccuracies can lead to dissatisfaction. People may express satisfaction with 90% accuracy in theory, but that means every 10th word is incorrect in reality. For subjective tasks like writing and video editing, the gap between people’s expectations and AI output involves more iterative interaction.
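A quick back-of-the-envelope calculation makes that gap visible, assuming a 40-word voicemail (the length is an assumption for illustration):

word_accuracy = 0.90
voicemail_length_words = 40          # assumed typical length
expected_errors = (1 - word_accuracy) * voicemail_length_words
print(f"~{expected_errors:.0f} wrong words in a {voicemail_length_words}-word voicemail")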

Still, low-quality results can degrade user trust in AI, leading them to alternative solutions or more controlled & manual ways to get things done.

Research methods are more thoroughly described here. Other design considerations for ML are overviewed in Apple’s Human Interface Guidelines and Google’s People + AI Guidebook.

The power of combining proactive and reactive AI lies in their complementary abilities to anticipate and respond to needs

Fitting within the user’s flow, automating repetitive tasks, offering in-the-moment convenience, and saving time, effort, and attention.

This framing shifts away from AI technology mechanisms, instead emphasizing user experience. Intelligence is valuable when it offers better or different ways to do things within established categories, or even new ones.

People’s familiarity with AI also evolves with new product experiences. What’s out of the box can’t be put back in. Uniquely, Design is closest to the user and well positioned to identify the opportunity for, and appropriateness of, AI interaction.

Looking forward to continuing the discussion.

Elaine designs human-AI interactions for robotics, with experience in AI/ML consumer and enterprise products, ML tools, and research in interaction design.

Thanks for reading!

Experience, not Technology was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
