Dialog system
A dialog system or conversational agent (CA) is a computer system intended to converse with a human, with a coherent structure. Dialog systems have employed text, speech, graphics, haptics, gestures and other modes for communication on both the input and output channel.
What does and does not constitute a dialog system may be debatable. The typical GUI wizard does engage in some sort of dialog, but it includes very few of the common dialog system components, and dialog state is trivial.
Components
There are many different architectures for dialog systems. Which components a dialog system includes, and how those components divide up responsibilities, differ from system to system. Central to any dialog system is the dialog manager, the component that manages the state of the dialog and the dialog strategy. A typical activity cycle in a dialog system contains the following phases:[1]
1. The user speaks, and the input is converted to plain text by the system's input recognizer/decoder, which may include:
 - automatic speech recognizer (ASR)
 - gesture recognizer
 - handwriting recognizer
2. The text is analyzed by a natural language understanding (NLU) unit, which may include:
 - proper name identification
 - part-of-speech tagging
 - syntactic/semantic parsing
3. The semantic information is analyzed by the dialog manager (see section below), along with a task manager that has knowledge of the specific task domain.
4. The dialog manager produces output using an output generator, which may include:
 - natural language generator
 - gesture generator
 - layout engine
5. Finally, the output is rendered using an output renderer, which may include:
 - text-to-speech (TTS) engine
 - talking head
 - robot or avatar
Dialog systems that are based on a text-only interface (e.g. text-based chat) contain only stages 2-4.
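The text-only cycle (stages 2-4) can be sketched in a few lines of Python. All function names, intents, and templates below are invented for illustration; they are toy stand-ins for the NLU, dialog manager, and output generator, not the API of any particular toolkit:

```python
# Toy sketch of one text-only dialog activity cycle (stages 2-4).
# Every name here is illustrative; real systems use far richer models.

def understand(text):
    # Stage 2: a keyword-based NLU mapping an utterance to intent + entities.
    text = text.lower()
    if "weather" in text:
        return {"intent": "get_weather",
                "city": "london" if "london" in text else None}
    return {"intent": "unknown"}

def manage(semantics, state):
    # Stage 3: the dialog manager records history and picks a system act.
    state["history"].append(semantics)
    if semantics["intent"] == "get_weather":
        if semantics["city"] is None:
            return "request_city"      # missing slot: ask a follow-up question
        return "inform_weather"
    return "clarify"

def generate(act):
    # Stage 4: a template-based natural language generator.
    templates = {
        "request_city": "Which city would you like the weather for?",
        "inform_weather": "Here is the weather you asked for.",
        "clarify": "Sorry, could you rephrase that?",
    }
    return templates[act]

state = {"history": []}
reply = generate(manage(understand("What's the weather in London?"), state))
```

In a speech-based system, stage 1 (ASR) would feed `understand` and stage 5 (TTS) would render `reply`.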
Dialog manager
The dialog manager is the core component of the dialog system. It maintains the history of the dialog, adopts a dialog strategy (see below), retrieves content (stored in files or databases), and decides on the best response to the user. In short, the dialog manager maintains the dialog flow.
The design of the dialog manager has evolved over time; common designs include:
- finite-state machine
- frame-based: The system has several slots to be filled. The slots can be filled in any order. This supports mixed-initiative dialog strategy.
- information-state based
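The frame-based design can be sketched as follows. The slot names and the ask-for-the-first-empty-slot policy are illustrative assumptions, not the API of any specific toolkit:

```python
# Hypothetical sketch of a frame-based dialog manager: the frame has slots
# that may be filled in any order, which enables mixed-initiative dialog.

class FrameDialogManager:
    def __init__(self, slots):
        self.frame = {slot: None for slot in slots}

    def update(self, observed):
        # Accept any slot values the user volunteered, in any order.
        for slot, value in observed.items():
            if slot in self.frame:
                self.frame[slot] = value

    def next_action(self):
        missing = [s for s, v in self.frame.items() if v is None]
        if missing:
            return ("request", missing[0])    # ask for an empty slot
        return ("execute", dict(self.frame))  # frame complete: act on it

dm = FrameDialogManager(["origin", "destination", "date"])
dm.update({"destination": "Paris"})           # user led with the destination
assert dm.next_action() == ("request", "origin")
dm.update({"origin": "Berlin", "date": "Friday"})
action, frame = dm.next_action()
```

Because `update` accepts values for any slot, the user can answer a question about the origin while also volunteering the date, and the system simply asks for whatever remains.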
The dialog flow can follow these strategies:
- System-initiative dialog: The system is in control and guides the dialog at each step.
- Mixed-initiative dialog: Users can barge in and change the dialog direction. The system follows the user's request but tries to direct the user back to the original course. This is the most commonly used dialog strategy in today's dialog systems.
- User-initiative dialog: The user takes the lead, and the system responds to whatever the user directs.
- Learned strategy: The system's next dialog action is chosen by an optimization method such as reinforcement learning.
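As a toy illustration of a learned strategy, the sketch below trains a tabular Q-learning policy over a two-state dialog in which "request" is the right action while the slot is empty and "confirm" is right once it is filled. The states, actions, and rewards are invented for illustration; real systems learn from user simulators or logged dialogs:

```python
# Toy learned dialog strategy: tabular Q-learning over a two-state dialog.
# States, actions, and rewards are invented for illustration only.
import random

random.seed(0)
actions = ["request", "confirm"]
Q = {(s, a): 0.0 for s in ("empty", "filled") for a in actions}

def step(state, action):
    # Reward +1 for the contextually right action, -1 otherwise.
    if state == "empty":
        return ("filled", 1.0) if action == "request" else ("empty", -1.0)
    return ("done", 1.0) if action == "confirm" else ("filled", -1.0)

alpha, gamma, epsilon = 0.5, 0.9, 0.1
for episode in range(200):
    state = "empty"
    while state != "done":
        if random.random() < epsilon:                      # explore
            action = random.choice(actions)
        else:                                              # exploit
            action = max(actions, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        future = 0.0 if nxt == "done" else max(Q[(nxt, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * future - Q[(state, action)])
        state = nxt

policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in ("empty", "filled")}
```

After training, the greedy policy requests the slot while it is empty and confirms once it is filled, i.e. the strategy was learned rather than hand-coded.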
The dialog manager can be connected with an expert system to give the ability to respond with specific expertise.
Types of systems
Dialog systems can be categorized along a few dimensions. Many of the categories overlap, and the distinctions may not be well established.
- by modality
- by device
- by style
- command-based
- menu-driven
- natural language
- speech graffiti
- by initiative [2]
- system initiative
- user initiative
- mixed initiative
Applications
Dialog systems can support a broad range of applications in business enterprises, education, government, healthcare, and entertainment.[3] For example:
- Responding to customers' questions about products and services via a company’s website or intranet portal
- Customer service agent knowledge base: Allows agents to type in a customer’s question and guide them with a response
- Guided selling: Facilitating transactions by providing answers and guidance in the sales process, particularly for complex products being sold to novice customers
- Help desk: Responding to internal employee questions, e.g., responding to HR questions
- Website navigation: Guiding customers to relevant portions of complex websites; a website concierge
- Technical support: Responding to technical problems, such as diagnosing a problem with a product or device
- Personalized service: Conversational agents can leverage internal and external databases to personalize interactions, such as answering questions about account balances, providing portfolio information, or delivering frequent-flier or membership information
- Training or education: They can provide problem-solving advice while the user learns
- Simple dialog systems are widely used to decrease human workload in call centres. In this and other industrial telephony applications, the functionality provided by dialog systems is known as interactive voice response or IVR.
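A simple system-initiative IVR of this kind is often implemented as a finite-state machine over a menu tree. The menu states and keypad transitions below are invented for illustration:

```python
# Sketch of a system-initiative IVR menu as a finite-state machine.
# States, prompts, and keypad transitions are hypothetical examples.

MENU = {
    "root":    {"prompt": "Press 1 for billing, 2 for support.",
                "1": "billing", "2": "support"},
    "billing": {"prompt": "Press 1 for your balance, 9 to go back.",
                "1": "balance", "9": "root"},
    "support": {"prompt": "Please hold for the next available agent."},
    "balance": {"prompt": "Your balance is being retrieved."},
}

def run_ivr(keys, start="root"):
    # Follow the caller's keypresses through the menu; invalid keys
    # leave the caller in the current state (the prompt would repeat).
    state = start
    visited = [state]
    for key in keys:
        state = MENU[state].get(key, state)
        visited.append(state)
    return state, visited

final, path = run_ivr(["1", "7", "1"])   # "7" is invalid and is ignored
```

The trivial, fully system-driven dialog state is exactly why such systems are cheap to deploy but frustrating when the caller's goal does not match the menu.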
In some cases, conversational agents can interact with users using artificial characters. These agents are then referred to as embodied agents.
Toolkits and architectures
The following is a survey of current frameworks, languages and technologies for defining dialog systems.
- AIML: Chatterbot language. An XML dialect for creating natural language software agents. Affiliation: Richard Wallace. There is a wikidot site for free AIML code.
- Teneo platform: Virtual assistant platform. Enterprise platform used by over 100 virtual assistants at companies such as IKEA, Telenor, Slovenian Tax Authorities, WeBank, Canal Digital, Försäkringskassan, etc. Affiliation: Artificial Solutions. Environments: Windows, Linux. There is a corporate site for information.
- ChatScript: Chatterbot language. Affiliation: Bruce Wilcox. Environments: Windows, Linux. Winner of the 2010 Loebner Chatterbot competition.
- RiveScript: Chatterbot language. Much shorter than AIML, with additional features. Affiliation: Casey Kirsle. Environment: Perl.
- Personality Forge: Chatterbot host. Environment: Web.
- Pandora Bots: Chatterbot host. Environment: Web.
- TrindiKit: Information state update. Affiliations: Goteborg University, the TALK project, Staffan Larsson, David Traum. Environment: Sicstus Prolog.
- DIPPER: Information state update. Affiliations: Edinburgh LTG, Stanford CSLI, Johan Bos, Ewan Klein, Oliver Lemon, Tetsushi Oka. Environments: Sicstus Prolog, OAA. Part of DUDE; the DIPPER Dialog Move Engine is a stripped-down version of the DME delivered as part of the TrindiKit.
- RavenClaw: Spoken, plan-based. Affiliations: Carnegie Mellon University, Microsoft, Dan Bohus, Alexander Rudnicky. Environment: Windows with Visual Studio, C++ and Perl; requires a TTS system such as Festival. Part of the Olympus architecture; a successor to CMU Communicator.
- Galaxy Communicator: Spoken, frame-based. A distributed, message-based, hub-and-spoke infrastructure optimized for constructing spoken dialogue systems. Affiliations: MITRE, DARPA. Environment: C (on Unix or Windows). A successor to the MIT Galaxy Architecture.
- CSLU Toolkit: A state-based speech interface prototyping environment. Affiliations: OGI School of Science and Engineering, M. McTear, Ron Cole. Publications are from 1999.
- VoiceBrowse: An architecture enabling the dynamic production of dialogue driven by unstructured online (Internet) sources.
- Basilica: An event-driven software architecture for creating conversational agents as a collection of reusable components. Affiliations: Carnegie Mellon University, Rohit Kumar, Carolyn P. Rose.
- WIT: A toolkit for building spoken dialog systems, with an incremental understanding mechanism. Affiliations: NTT Corporation, Mikio Nakano, Noboru Miyazaki, Norihito Yasuda, Akira Sugiyama, Jun-ichi Hirasawa, Kohji Dohsaka, Kiyoaki Aikawa.
- NPC Editor: An NPC for answering questions. Affiliations: ICT, Anton Leuski, David Traum. Environment: Windows. Part of the Virtual Humans Toolkit.
- KomParse: An NPC for natural conversation. Affiliations: DFKI, Tina Kluwer, Peter Adolphs, Feiyu Xu, Hans Uszkoreit, Xiwen Cheng. Environment: Twinity virtual world.
- TuTalk: Tutorial dialog. Infrastructure for authoring and experimenting with natural language dialogue in tutoring systems and learning research. Affiliations: University of Pittsburgh, Carnegie Mellon University, Pam Jordan, Carolyn Rose.
- ADAMACH: POMDP spoken dialog. Affiliations: University of Trento, Sebastian Varges, Giuseppe Riccardi, Silvia Quarteroni. Under construction.
- VXML (VoiceXML): Spoken/multimodal dialog markup language, developed initially by AT&T, then administered by an industry consortium, and finally a W3C specification. Primarily for telephony.
- SALT: Multimodal dialog markup language. Affiliation: Microsoft. "Has not reached the level of maturity of VoiceXML in the standards process."
- Ariadne: Environment: Windows.
- Quack.com - QXML: Development environment. Company bought by AOL.
References
- ^ Jurafsky & Martin (2009), Speech and Language Processing. Pearson International Edition, ISBN 978-0-13-504196-3, Chapter 24
- ^ Will, Thomas (2007). Creating a Dynamic Speech Dialogue. VDM Verlag Dr. Müller. ISBN 978-3836449908.
- ^ Lester, J.; Branting, K.; Mott, B. (2004), "Conversational Agents", The Practical Handbook of Internet Computing, Chapman & Hall, http://www.astutesolutions.com/downloads/conversational_agents_Lester_RealDialog.pdf
Further reading
- Dialogue Processing in Spoken Language Systems
- Voice User Interface Design
- Spoken Dialogue Technology: Towards the Conversational Interface
- Machine Conversations
- Dialog system at BookRags.
- Dialogue system links by Staffan Larsson.
- Machine learning approaches to building spoken dialogue systems: the CLASSiC project.
External links
- The Conversational Interface: Our Next Great Leap Forward, 2003, article on the conversational interface by futurist John Smart.
Wikimedia Foundation. 2010.