The rise of generative AI (GenAI) tools such as ChatGPT has transformed the research environment, yet most legal researchers remain untrained in the theory, mechanics, and epistemic structure of such systems. The public was introduced to GenAI through Generative Pre-trained Transformer (GPT) tools such as ChatGPT and similar chat assistants such as Claude. Although AI is a decades-old academic discipline, it is now expanding rapidly, and LLM-based AI tools (hereinafter Legal Research AI Tools, or LRATs, such as Lexis+ AI) sit at AI’s cutting edge within legal research. These LRATs depend on theoretical concepts and technologies drawn from information science rather than law, and legal researchers often struggle to understand how such AI-enabled tools function, which makes effective and reliable use of them more difficult. Without proper orientation, legal professionals risk using LRATs with misplaced confidence and insufficient clarity, the implications of which will be addressed in a future article. This article, written by Ryan Marcotte, Reference, Instruction, & Scholarship law librarian at DePaul University’s College of Law in Chicago, Illinois, defines and explains AI-assisted legal research (AIALR) as a third phase of research logic, following the traditional book-based legal research (BLR) and computer-assisted legal research (CALR) phases. It also introduces a definition of AI tailored to legal research, outlines key conceptual structures underpinning LRATs, and explains how those tools interpret human input. From this grounding, the article offers two frameworks: (1) the Five Ps Research Plan and (2) the four prompt engineering methodologies of Retrieval-Augmented Generation, Few-Shot Prompting, Chain-of-Thought/Chain-of-Logic, and Prompt Chaining. Together, these frameworks equip legal researchers with the understanding and skills to plan, shape, and evaluate their research interactions with LRATs in the age of GenAI.