Show simple item record

dc.contributor.author: Fernández González, Daniel
dc.date.accessioned: 2024-09-18T08:01:42Z
dc.date.available: 2024-09-18T08:01:42Z
dc.date.issued: 2024-08-22
dc.identifier.citation: Cognitive Computation, 1, 1-17 (2024) [spa]
dc.identifier.issn: 1866-9956
dc.identifier.issn: 1866-9964
dc.identifier.uri: http://hdl.handle.net/11093/7460
dc.description.abstract: Intelligent voice assistants, such as Apple Siri and Amazon Alexa, are widely used nowadays. These task-oriented dialogue systems require a semantic parsing module in order to process user utterances and understand the action to be performed. This semantic parsing component was initially implemented by rule-based or statistical slot-filling approaches for processing simple queries; however, the appearance of more complex utterances demanded the application of shift-reduce parsers or sequence-to-sequence models. Although shift-reduce approaches were initially considered the most promising option, the emergence of sequence-to-sequence neural systems has propelled them to the forefront as the highest-performing method for this particular task. In this article, we advance the research on shift-reduce semantic parsing for task-oriented dialogue. We implement novel shift-reduce parsers that rely on Stack-Transformers. This framework makes it possible to adequately model transition systems on the transformer neural architecture, notably boosting shift-reduce parsing performance. Furthermore, our approach goes beyond the conventional top-down algorithm: we incorporate alternative bottom-up and in-order transition systems derived from constituency parsing into the realm of task-oriented parsing. We extensively test our approach on multiple domains from the Facebook TOP benchmark, improving over existing shift-reduce parsers and state-of-the-art sequence-to-sequence models in both high-resource and low-resource settings. We also empirically prove that the in-order algorithm substantially outperforms the commonly used top-down strategy. Through the creation of innovative transition systems and harnessing the capabilities of a robust neural architecture, our study showcases the superiority of shift-reduce parsers over leading sequence-to-sequence methods on the main benchmark. [en]
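The abstract refers to transition systems (top-down, bottom-up, in-order) for mapping an utterance to a TOP-style semantic tree. As a rough illustration only — this is not the paper's implementation, and the action names NT/SHIFT/REDUCE are generic transition-parsing terminology assumed here — a minimal top-down transition system can be sketched as:

```python
# Illustrative sketch of a top-down shift-reduce transition system
# for TOP-style semantic parses (NOT the paper's Stack-Transformer model).
# Actions: ("NT", label) opens a nonterminal, "SHIFT" moves the next input
# token onto the stack, "REDUCE" closes the most recently opened nonterminal.

def parse(tokens, actions):
    """Apply a top-down action sequence; return the bracketed TOP tree."""
    stack, buf = [], list(tokens)
    for act in actions:
        if act == "SHIFT":
            stack.append(buf.pop(0))          # consume one input token
        elif act == "REDUCE":
            children = []
            # pop completed children until the open-nonterminal marker
            while not isinstance(stack[-1], tuple):
                children.append(stack.pop())
            label = stack.pop()[1]
            stack.append("[" + label + " " + " ".join(reversed(children)) + " ]")
        else:                                  # ("NT", label)
            stack.append(("OPEN", act[1]))     # open-nonterminal marker
    assert len(stack) == 1 and not buf, "action sequence did not yield one tree"
    return stack[0]

# Hypothetical example in the TOP bracketing style:
tokens = ["What's", "the", "weather", "in", "Boston"]
actions = [("NT", "IN:GET_WEATHER"), "SHIFT", "SHIFT", "SHIFT", "SHIFT",
           ("NT", "SL:LOCATION"), "SHIFT", "REDUCE", "REDUCE"]
print(parse(tokens, actions))
# -> [IN:GET_WEATHER What's the weather in [SL:LOCATION Boston ] ]
```

In a top-down system the intent nonterminal is opened before any of its tokens are shifted; the in-order variant the abstract favors would instead shift the first child before opening the nonterminal, which is what a derivation from in-order constituency parsing looks like.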
dc.description.sponsorship: Xunta de Galicia | Ref. ED431C 2020/11 [spa]
dc.description.sponsorship: Xunta de Galicia | Ref. ED431G 2019/01 [spa]
dc.description.sponsorship: Universidade de Vigo/CISUG [spa]
dc.description.sponsorship: European Commission | Ref. 714150 [spa]
dc.description.sponsorship: Agencia Estatal de Investigación | Ref. PID2020-113230RB-C21 [spa]
dc.language.iso: eng [spa]
dc.publisher: Cognitive Computation [spa]
dc.relation: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2020-113230RB-C21/ES
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Shift-reduce task-oriented semantic parsing with stack-transformers [en]
dc.type: article [spa]
dc.rights.accessRights: openAccess [spa]
dc.relation.projectID: info:eu-repo/grantAgreement/EC/H2020/714150 [spa]
dc.identifier.doi: 10.1007/s12559-024-10339-4
dc.identifier.editor: https://link.springer.com/10.1007/s12559-024-10339-4 [spa]
dc.publisher.departamento: Informática [spa]
dc.publisher.grupoinvestigacion: COmputational LEarning [spa]
dc.subject.unesco: 1203.04 Inteligencia Artificial [spa]
dc.subject.unesco: 3325.99 Otras [spa]
dc.date.updated: 2024-09-10T10:59:35Z
dc.computerCitation: pub_title=Cognitive Computation|volume=1|journal_number=|start_pag=1|end_pag=17 [spa]


Files in this item

[PDF]
