Speech recognition and large language models have advanced at an impressive pace. AI systems can now transcribe and respond to spoken language with high accuracy. Yet these capabilities depend on controlled conditions, typically involving a single speaker and clear turn-taking. Once multiple people enter the conversation, the rules change. Interruptions, overlapping voices, and shifting focus introduce a level of complexity that most conversational AI is not designed to handle.
What Makes Multi-Person Dialogue So Difficult
Human group conversations are fluid and unpredictable. Speakers cut in, change topics mid-sentence, or speak without directly addressing everyone in the room. For AI, this creates a cascade of decisions that go far beyond recognizing words. The system must determine which voice matters, whether a response is expected, and if speaking at all would be appropriate. Without this judgment, AI risks becoming disruptive rather than helpful.
Most existing voice systems avoid this problem by assuming a single user. They react to wake words or commands without understanding the broader conversational context, which works in isolation but breaks down in shared environments.
Selective Attention: A Different Way to Think About Conversation
An emerging concept in conversational AI is "Selective Attention." Instead of processing every sound equally, selective attention enables AI to continuously decide where its focus should be. This includes identifying the most relevant speaker, tracking conversational intent, and recognizing moments when silence is the correct response.
This approach shifts conversational success away from constant replies and toward contextual awareness. In many real-world interactions, knowing when not to speak is just as important as knowing what to say.
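How such a system is built has not been made public, but the core decision loop can be illustrated with a minimal sketch. Everything below is hypothetical: the cue features (addressed_to_ai, expects_response), the thresholds, and the function names are illustrative assumptions, not Attention Labs' design or API. The point is simply that the system's output space includes staying silent, and that responding requires evidence both that the AI is being addressed and that a reply is actually expected.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    STAY_SILENT = auto()   # silence is a valid, intentional outcome
    RESPOND = auto()


@dataclass
class Utterance:
    speaker_id: str          # which voice produced this segment
    text: str                # transcribed content
    addressed_to_ai: float   # 0..1 estimate that the AI is being addressed
    expects_response: float  # 0..1 estimate that a reply is expected


def decide(utterance: Utterance,
           address_threshold: float = 0.7,
           response_threshold: float = 0.6) -> Action:
    """Toy selective-attention policy: respond only when the cues suggest
    the AI is the intended recipient AND a reply is actually expected."""
    if utterance.addressed_to_ai < address_threshold:
        return Action.STAY_SILENT   # the conversation is between humans
    if utterance.expects_response < response_threshold:
        return Action.STAY_SILENT   # addressed, but no reply is needed
    return Action.RESPOND


# Example: two people talk to each other, then one turns to the assistant.
stream = [
    Utterance("alice", "So should we move the meeting?", 0.1, 0.9),
    Utterance("bob",   "Yeah, Thursday works better.",   0.1, 0.2),
    Utterance("alice", "Assistant, what's on Thursday?", 0.9, 0.9),
]
for u in stream:
    print(u.speaker_id, decide(u))
```

In a real system the cue estimates would come from models operating on audio and context (speaker diarization, gaze or addressee detection, dialogue state), but the shape of the decision stays the same: focus is allocated continuously, and silence is chosen explicitly rather than falling out of a missed wake word.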
Attention Labs’ CES 2026 Demonstration
At CES 2026, Toronto-based startup Attention Labs presented a system built specifically around this idea. Rather than showcasing better transcription or faster responses, the company focused on attention management in group settings. Their on-device AI was demonstrated live in an unscripted environment, where several people spoke naturally around a robot equipped with the attention engine.
The system did not respond automatically to every voice. It evaluated conversational cues and engaged only when appropriate. It remained quiet when the context suggested it should. All of this processing occurred locally on the device, without relying on cloud infrastructure.
Why Silence Is a Feature, Not a Failure
One of the key insights behind Attention Labs’ approach is that conversational AI should treat silence as an intentional outcome. In homes, vehicles, offices, and social spaces, unnecessary responses can interrupt discussions or create confusion. Traditional assistants often struggle here and respond mechanically to triggers without understanding group dynamics.
By incorporating selective attention, AI systems can behave more like human participants, observing first and acting only when their input adds value.
Recognition and Industry Implications
Attention Labs’ work earned a CES Picks Award from TechRadar Pro, signaling growing interest in conversational systems that move beyond single-user assumptions. The recognition reflects a broader industry shift toward AI designed for shared physical environments rather than isolated interactions.

Where This Technology Matters Most
As AI becomes embedded in robotics, smart devices, and in-vehicle systems, the ability to manage multi-person interaction is becoming foundational. A robot in a workplace, a smart assistant in a living room, or an AI system in a car must all navigate conversations involving more than one voice. Selective attention provides a pathway for these systems to operate smoothly without dominating the interaction.
The Road Ahead for Conversational AI
Handling multi-person conversation is a requirement for AI to function naturally alongside humans. While perfect group-aware AI remains a work in progress, attention-driven systems mark a step forward. By focusing less on constant output and more on contextual awareness, conversational AI is beginning to align with how people actually communicate.
As research and real-world deployments continue, selective attention is likely to become a core component of future conversational systems, enabling AI to listen more intelligently, respond more thoughtfully, and fit more seamlessly into shared human spaces.