My research interests broadly lie in the fields of Machine Learning, Natural Language Processing, and Speech Processing. In the past, I have worked on time-series and sequence-to-sequence tasks. Recently, I have been working on research problems in prompt-based learning and in-context learning with pretrained large language models in zero-shot and few-shot settings, as well as evaluating model-generated step-by-step reasoning. I am excited about problems involving (task) instruction-based learning and evaluating the consistency and reasoning abilities of large language models. I am also interested in building machine learning models that are robust to distributional shifts and do not depend on dataset biases or spurious correlations.

Publications and Patents [Google Scholar] [Semantic Scholar]

  • Soft Self-Consistency Improves Language Model Agents
    Han Wang*, Archiki Prasad*, Elias Stengel-Eskin*, and Mohit Bansal
arXiv preprint
    [abstract] [code]

  • ReGAL: Refactoring Programs to Discover Generalizable Abstractions
    Elias Stengel-Eskin*, Archiki Prasad*, and Mohit Bansal
arXiv preprint
    [abstract] [code]

  • ADaPT: As-Needed Decomposition and Planning with Language Models
    Archiki Prasad, Alexander Koller, Mareike Hartmann, Peter Clark, Ashish Sabharwal, Mohit Bansal, and Tushar Khot
arXiv preprint
    [abstract] [code] [project page]

  • Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models
    Archiki Prasad, Elias Stengel-Eskin and Mohit Bansal
    ICLR 2024
    [abstract] [code]

  • ReCEval: Evaluating Reasoning Chains via Correctness and Informativeness
    Archiki Prasad, Swarnadeep Saha, Xiang Zhou and Mohit Bansal
    EMNLP 2023
    [abstract] [code]

  • MeetingQA: Extractive Question-Answering on Meeting Transcripts
    Archiki Prasad, Trung Bui, Seunghyun Yoon, Hanieh Deilamsalehy, Franck Dernoncourt and Mohit Bansal
    ACL 2023
    [abstract] [code + data] [project page]

  • GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models
    Archiki Prasad, Peter Hase, Xiang Zhou and Mohit Bansal
    EACL 2023
    [abstract] [code]

  • The Effectiveness of Intermediate-Task Training for Code-Switched Natural Language Understanding
    Archiki Prasad*, Mohammad Ali Rehan*, Shreya Pathak* and Preethi Jyothi
Workshop on Multilingual Representation Learning (MRL) at EMNLP 2021
    🏆 Best Paper Honorable Mention
    [abstract] [code] [slides]

  • An Investigation of End-to-End Models for Robust Speech Recognition
    Archiki Prasad, Preethi Jyothi and Rajbabu Velmurugan
    IEEE-ICASSP 2021
    [abstract] [code] [poster]

  • Decentralized Age-of-Information Bandits
    Archiki Prasad, Vishal Jain and Sharayu Moharir
    IEEE-WCNC 2021
    [abstract] [long-form with proofs]

  • How Accents Confound: Probing for Accent Information in End-to-End Speech Recognition Systems
    Archiki Prasad and Preethi Jyothi
    ACL 2020
    [abstract] [code] [talk]

  • Time Series Forecasting for Cold-Start Items by Learning from Related Items using Memory Networks
    Ayush Chauhan, Archiki Prasad, Parth Gupta, Amiredddy Prashanth Reddy and Shiv Kumar Saini
    The Web Conference (WWW) 2020

  • Key-Value Memory Networks for Predicting Time Series Metrics of Target Entities
    Shiv Kumar Saini, Ayush Chauhan, Parth Gupta, Archiki Prasad, Amiredddy Prashanth Reddy and Ritwick Chaudhary
Patent filed at the US Patent and Trademark Office 2020 | Adobe Inc.
    [summary] [application no. US16/868942]

Research Projects

Please have a look at my Curriculum Vitae for a comprehensive list of my projects.