An autonomous debating system

  • 1.

Lawrence, J. & Reed, C. Argument mining: a survey. Comput. Linguist. 45, 765–818 (2019).

  • 2.

Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. Preprint at https://arxiv.org/abs/1810.04805 (2018).

  • 3.

Peters, M. et al. Deep contextualized word representations. In Proc. 2018 Conf. North Am. Ch. Assoc. for Computational Linguistics: Human Language Technologies Vol. 1, 2227–2237 (Association for Computational Linguistics, 2018); https://www.aclweb.org/anthology/N18-1202

  • 4.

Radford, A. et al. Language models are unsupervised multitask learners. OpenAI Blog 1, http://www.persagen.com/files/misc/radford2019language.pdf (2019).

  • 5.

Socher, R. et al. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. 2013 Conf. on Empirical Methods in Natural Language Processing (EMNLP) 1631–1642 (Association for Computational Linguistics, 2013).

  • 6.

Yang, Z. et al. XLNet: generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems (NeurIPS) 5753–5763 (Curran Associates, 2019).

  • 7.

Cho, K., van Merriënboer, B., Bahdanau, D. & Bengio, Y. On the properties of neural machine translation: encoder–decoder approaches. In Proc. 8th Worksh. on Syntax, Semantics and Structure in Statistical Translation 103–111 (Association for Computational Linguistics, 2014).

  • 8.

Gambhir, M. & Gupta, V. Recent automatic text summarization techniques: a survey. Artif. Intell. Rev. 47, 1–66 (2017).

  • 9.

Young, S., Gašić, M., Thomson, B. & Williams, J. POMDP-based statistical spoken dialogue systems: a review. Proc. IEEE 101, 1160–1179 (2013).

  • 10.

Gurevych, I., Hovy, E. H., Slonim, N. & Stein, B. Debating technologies (Dagstuhl Seminar 15512). Dagstuhl Reports 5 (2016).

  • 11.

Levy, R., Bilu, Y., Hershcovich, D., Aharoni, E. & Slonim, N. Context-dependent claim detection. In Proc. COLING 2014, the 25th Int. Conf. on Computational Linguistics: Technical Papers 1489–1500 (Dublin City University and Association for Computational Linguistics, 2014); https://www.aclweb.org/anthology/C14-1141

  • 12.

Rinott, R. et al. Show me your evidence – an automatic method for context dependent evidence detection. In Proc. 2015 Conf. on Empirical Methods in Natural Language Processing 440–450 (Association for Computational Linguistics, 2015); https://www.aclweb.org/anthology/D15-1050

  • 13.

Shnayderman, I. et al. Fast end-to-end wikification. Preprint at https://arxiv.org/abs/1908.06785 (2019).

  • 14.

Borthwick, A. A maximum entropy approach to named entity recognition. Dissertation, New York Univ. https://cs.nyu.edu/media/publications/borthwick_andrew.pdf (1999).

  • 15.

Finkel, J. R., Grenager, T. & Manning, C. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proc. 43rd Ann. Meet. Assoc. for Computational Linguistics 363–370 (Association for Computational Linguistics, 2005).

  • 16.

Levy, R., Bogin, B., Gretz, S., Aharonov, R. & Slonim, N. Towards an argumentative content search engine using weak supervision. In Proc. 27th Int. Conf. on Computational Linguistics (COLING 2018) 2066–2081 (International Committee on Computational Linguistics, 2018); https://www.aclweb.org/anthology/C18-1176.pdf

  • 17.

Ein-Dor, L. et al. Corpus wide argument mining – a working solution. In Proc. Thirty-Fourth AAAI Conf. on Artificial Intelligence 7683–7691 (AAAI Press, 2020).

  • 18.

Levy, R. et al. Unsupervised corpus-wide claim detection. In Proc. 4th Worksh. on Argument Mining 79–84 (Association for Computational Linguistics, 2017); https://www.aclweb.org/anthology/W17-5110

  • 19.

Shnarch, E. et al. Will it blend? Blending weak and strong labeled data in a neural network for argumentation mining. In Proc. 56th Ann. Meet. Assoc. for Computational Linguistics Vol. 2, 599–605 (Association for Computational Linguistics, 2018); https://www.aclweb.org/anthology/P18-2095

  • 20.

Gleize, M. et al. Are you convinced? Choosing the more convincing evidence with a Siamese network. In Proc. 57th Ann. Meet. Assoc. for Computational Linguistics 967–976 (Association for Computational Linguistics, 2019).

  • 21.

Bar-Haim, R., Bhattacharya, I., Dinuzzo, F., Saha, A. & Slonim, N. Stance classification of context-dependent claims. In Proc. 15th Conf. Eur. Ch. Assoc. for Computational Linguistics Vol. 1, 251–261 (Association for Computational Linguistics, 2017).

  • 22.

Bar-Haim, R., Edelstein, L., Jochim, C. & Slonim, N. Improving claim stance classification with lexical knowledge expansion and context utilization. In Proc. 4th Worksh. on Argument Mining 32–38 (Association for Computational Linguistics, 2017).

  • 23.

Bar-Haim, R. et al. From surrogacy to adoption; from bitcoin to cryptocurrency: debate topic expansion. In Proc. 57th Ann. Meet. Assoc. for Computational Linguistics 977–990 (Association for Computational Linguistics, 2019).

  • 24.

Bilu, Y. et al. Argument invention from first principles. In Proc. 57th Ann. Meet. Assoc. for Computational Linguistics 1013–1026 (Association for Computational Linguistics, 2019).

  • 25.

Ein-Dor, L. et al. Semantic relatedness of Wikipedia concepts – benchmark data and a working solution. In Proc. Eleventh Int. Conf. on Language Resources and Evaluation (LREC 2018) 2571–2575 (European Language Resources Association, 2018).

  • 26.

Pahuja, V. et al. Joint learning of correlated sequence labelling tasks using bidirectional recurrent neural networks. In Proc. Interspeech 548–552 (International Speech Communication Association, 2017).

  • 27.

Mirkin, S. et al. Listening comprehension over argumentative content. In Proc. 2018 Conf. on Empirical Methods in Natural Language Processing 719–724 (Association for Computational Linguistics, 2018).

  • 28.

Lavee, T. et al. Listening for claims: listening comprehension using corpus-wide claim mining. In Proc. ArgMining Worksh. 58–66 (Association for Computational Linguistics, 2019).

  • 29.

Orbach, M. et al. A dataset of general-purpose rebuttal. In Proc. 2019 Conf. on Empirical Methods in Natural Language Processing 5595–5605 (Association for Computational Linguistics, 2019).

  • 30.

Slonim, N., Atwal, G. S., Tkačik, G. & Bialek, W. Information-based clustering. Proc. Natl Acad. Sci. USA 102, 18297–18302 (2005).

  • 31.

Ein Dor, L. et al. Learning thematic similarity metric from article sections using triplet networks. In Proc. 56th Ann. Meet. Assoc. for Computational Linguistics Vol. 2, 49–54 (Association for Computational Linguistics, 2018); https://www.aclweb.org/anthology/P18-2009

  • 32.

Shechtman, S. & Mordechay, M. Emphatic speech prosody prediction with deep LSTM networks. In 2018 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP) 5119–5123 (IEEE, 2018).

  • 33.

Mass, Y. et al. Word emphasis prediction for expressive text to speech. In Proc. Interspeech 2868–2872 (International Speech Communication Association, 2018).

  • 34.

Feigenblat, G., Roitman, H., Boni, O. & Konopnicki, D. Unsupervised query-focused multi-document summarization using the cross entropy method. In Proc. 40th Int. ACM SIGIR Conf. on Research and Development in Information Retrieval 961–964 (Association for Computing Machinery, 2017).

  • 35.

Daxenberger, J., Schiller, B., Stahlhut, C., Kaiser, E. & Gurevych, I. Argument classification and clustering in a generalized search scenario. Datenbank-Spektrum 20, 115–121 (2020).

  • 36.

Gretz, S. et al. A large-scale dataset for argument quality ranking: construction and analysis. In Proc. Thirty-Fourth AAAI Conf. on Artificial Intelligence 7805–7813 (AAAI Press, 2020); https://aaai.org/ojs/index.php/AAAI/article/view/6285

  • 37.

Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016).

  • 38.

Samuel, A. L. Some studies in machine learning using the game of checkers. IBM J. Res. Dev. 3, 210–229 (1959).

  • 39.

Tesauro, G. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Comput. 6, 215–219 (1994).

  • 40.

Campbell, M., Hoane, A. J. Jr & Hsu, F.-h. Deep Blue. Artif. Intell. 134, 57–83 (2002).

  • 41.

Ferrucci, D. A. Introduction to “This is Watson”. IBM J. Res. Dev. 56, 235–249 (2012).

  • 42.

Silver, D. et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 1140–1144 (2018).

  • 43.

Coulom, R. Efficient selectivity and backup operators in Monte-Carlo tree search. In Proc. 5th Int. Conf. on Computers and Games inria-0011699 (Springer, 2006).

  • 44.

Vinyals, O. et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575, 350–354 (2019).
