Research

My research interests broadly lie in Natural Language Processing and Machine Learning. My primary research goals are (1) developing effective ways of evaluating reasoning over natural language, and (2) leveraging and improving the reasoning capabilities of Large Language Models (LLMs) to perform complex tasks that require compositional generalization and planning, and to enhance downstream performance. I seek to develop methods that enable LLMs to identify and rectify issues in their own reasoning, as well as to deepen our understanding of how they reason. I also explore practical applications of LLM reasoning in domains such as planning and coding.

Publications and Patents [Google Scholar] [Semantic Scholar]

  • System-1.x: Learning to Balance Fast and Slow Planning with Language Models
    Swarnadeep Saha, Archiki Prasad, Justin Chih-Yao Chen, Peter Hase, Elias Stengel-Eskin, and Mohit Bansal
arXiv preprint
    [abstract] [code]

  • Soft Self-Consistency Improves Language Model Agents
    Han Wang*, Archiki Prasad*, Elias Stengel-Eskin*, and Mohit Bansal
    ACL 2024
    [abstract] [code]

  • ReGAL: Refactoring Programs to Discover Generalizable Abstractions
    Elias Stengel-Eskin*, Archiki Prasad*, and Mohit Bansal
    ICML 2024
    [abstract] [code]

  • ADaPT: As-Needed Decomposition and Planning with Language Models
    Archiki Prasad, Alexander Koller, Mareike Hartmann, Peter Clark, Ashish Sabharwal, Mohit Bansal, and Tushar Khot
    NAACL 2024 (Findings)
    [abstract] [code] [project page]

  • Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models
Archiki Prasad, Elias Stengel-Eskin, and Mohit Bansal
    ICLR 2024
    [abstract] [code]

  • ReCEval: Evaluating Reasoning Chains via Correctness and Informativeness
Archiki Prasad, Swarnadeep Saha, Xiang Zhou, and Mohit Bansal
    EMNLP 2023
    [abstract] [code]

  • MeetingQA: Extractive Question-Answering on Meeting Transcripts
Archiki Prasad, Trung Bui, Seunghyun Yoon, Hanieh Deilamsalehy, Franck Dernoncourt, and Mohit Bansal
    ACL 2023
    [abstract] [code + data] [project page]

  • GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models
Archiki Prasad, Peter Hase, Xiang Zhou, and Mohit Bansal
    EACL 2023
    [abstract] [code]

  • The Effectiveness of Intermediate-Task Training for Code-Switched Natural Language Understanding
Archiki Prasad*, Mohammad Ali Rehan*, Shreya Pathak*, and Preethi Jyothi
Workshop on Multilingual Representation Learning (MRL) at EMNLP 2021
    🏆 Best Paper Honorable Mention
    [abstract] [code] [slides]

  • An Investigation of End-to-End Models for Robust Speech Recognition
Archiki Prasad, Preethi Jyothi, and Rajbabu Velmurugan
    IEEE-ICASSP 2021
    [abstract] [code] [poster]

  • Decentralized Age-of-Information Bandits
Archiki Prasad, Vishal Jain, and Sharayu Moharir
    IEEE-WCNC 2021
    [abstract] [long-form with proofs]

  • How Accents Confound: Probing for Accent Information in End-to-End Speech Recognition Systems
    Archiki Prasad and Preethi Jyothi
    ACL 2020
    [abstract] [code] [talk]

  • Time Series Forecasting for Cold-Start Items by Learning from Related Items using Memory Networks
    Ayush Chauhan, Archiki Prasad, Parth Gupta, Amiredddy Prashanth Reddy and Shiv Kumar Saini
    The Web Conference (WWW) 2020
    [abstract]

  • Key-Value Memory Networks for Predicting Time Series Metrics of Target Entities
    Shiv Kumar Saini, Ayush Chauhan, Parth Gupta, Archiki Prasad, Amiredddy Prashanth Reddy and Ritwick Chaudhary
Patent filed with the US Patent and Trademark Office, 2020 | Adobe Inc.
    [summary] [application no. US16/868942]

Research Projects

Please see my Curriculum Vitae for a comprehensive list of my projects.