Vishal Pallagani • PhD Dissertation • 2026
University of South Carolina

Generalized Planning Using Language Models and its Applications

A dissertation at the intersection of planning, language models, and robust decision-making, focused on generalization, controllability, and real-world impact.

Degree: Ph.D. • Computer Science
Institution: University of South Carolina
College: Molinaroli College of Engineering & Computing

Abstract

Planning is fundamental for intelligent systems, yet classical planners often struggle to scale and transfer across domains due to hand-engineered models, brittle search, and limited adaptability.

This dissertation investigates whether large language models (LLMs), powerful learners trained at scale, can be systematically leveraged to advance automated planning, while retaining the guarantees and structure that planning requires.

Contributions

  • Landscape & taxonomy: a synthesis of 128 papers categorizing how LLMs are used in planning, organizing techniques, objectives, and applications into a coherent taxonomy.
  • Plan generation with LMs: an empirical study of the plan-generation ability of pretrained models, the gains from fine-tuning on planning data, and how compact models can be trained from scratch to better support plan generation.
  • Neuro-symbolic planners: architectures integrating LMs with symbolic planning components to improve robustness and generalization, addressing LM failure modes in constrained decision-making.
  • Real-world evaluation: studies in collaborative information-retrieval assistance and manufacturing replanning, demonstrating practical decision-making under constraints.

Overall, the dissertation advances understanding of how LLMs can support, improve, and generalize automated planning, outlining a path toward planners that combine learning with symbolic reasoning.
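To make the neuro-symbolic idea concrete, the sketch below shows one common propose-and-validate pattern: a language model proposes candidate plans and a symbolic checker simulates them against a domain model, rejecting any plan whose preconditions fail. Everything here is an illustrative assumption, not the dissertation's actual implementation: the "language model" is a stub, the domain is a two-action toy blocks world, and a real system would instead query an LLM and validate with a tool such as VAL.

```python
# Illustrative propose-and-validate loop for neuro-symbolic planning.
# Each action maps to (preconditions, add effects, delete effects),
# STRIPS-style, over facts like ("clear", "a"). Toy domain only.
ACTIONS = {
    "unstack_a_b": ({("on", "a", "b"), ("clear", "a")},
                    {("holding", "a"), ("clear", "b")},
                    {("on", "a", "b"), ("clear", "a")}),
    "putdown_a":   ({("holding", "a")},
                    {("ontable", "a"), ("clear", "a")},
                    {("holding", "a")}),
}

def validate(plan, state, goal):
    """Symbolically simulate `plan`; True iff every action is applicable
    and the final state satisfies `goal`."""
    state = set(state)
    for name in plan:
        pre, add, delete = ACTIONS[name]
        if not pre <= state:          # precondition violated: reject
            return False
        state = (state - delete) | add
    return goal <= state

def mock_lm_propose():
    """Stand-in for an LLM: yields candidate plans, some invalid."""
    yield ["putdown_a"]                 # invalid: arm is not holding a
    yield ["unstack_a_b", "putdown_a"]  # valid

def neuro_symbolic_plan(state, goal):
    """Return the first LM-proposed plan that passes symbolic validation."""
    for candidate in mock_lm_propose():
        if validate(candidate, state, goal):
            return candidate
    return None

init = {("on", "a", "b"), ("clear", "a"), ("ontable", "b")}
goal = {("ontable", "a")}
print(neuro_symbolic_plan(init, goal))  # ['unstack_a_b', 'putdown_a']
```

The symbolic check is what restores the guarantee the abstract refers to: the LM may hallucinate inapplicable actions, but only plans that survive simulation are returned.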

Research Questions

RQ1 (Characterization): How are language models being used for automated planning?
RQ2 (Specialization): How can language models be used for effective plan generation?
  RQ2.1: How well do pretrained language models perform on plan generation tasks?
  RQ2.2: Does fine-tuning on planning data improve the ability to generate valid plans?
  RQ2.3: How can a compact language model be trained from scratch to better support plan generation?
RQ3 (Integration): How can language models and symbolic methods be combined to achieve robust plan generation via neuro-symbolic architectures?
RQ4 (Application): How do new generalized planners perform in applications?
  RQ4.1: In collaborative assistants for information retrieval.
  RQ4.2: In manufacturing replanning.
Progress timeline

RQ1 • Characterization: Taxonomy from literature synthesis. (done)
RQ2 • Specialization: Pretrained, fine-tuned, and compact-from-scratch plan generation. (current)
RQ3 • Integration: Neuro-symbolic architectures for robustness. (done)
RQ4 • Application: IR assistants and manufacturing replanning. (ongoing)

Theme: From “how LMs are used” → “how to make them plan reliably” → “how they behave in practice.”

Committee

Biplav Srivastava Major Professor
Amit Sheth Major Professor
Ramtin Zand Examination Chair
Lior Horesh Committee Member
Sarath Sreedharan Committee Member

Contact

Vishal Pallagani
PhD • Computer Science • University of South Carolina
Email: vishal.pallagani [at] gmail [dot] com
Links: Google Scholar · GitHub · LinkedIn