Prompt Engineering 101: Simplified for Students


Dive into the world of Prompt Engineering, a skill critical for leveraging artificial intelligence in today’s digital landscape. This post strips away the complexity, offering an easy-to-follow guide to engaging with large language models (LLMs) such as ChatGPT.

Whether it’s crafting your first prompt or refining your approach, we cover the basics and essentials of prompt engineering. Let’s get started!


Key Takeaways

  • Harness AI’s Potential: Prompt engineering is the art of crafting specific instructions to guide AI models like ChatGPT, enabling them to generate precise, relevant responses or creative outputs.
  • Techniques Tailored to Goals: Use zero-shot learning for general tasks, few-shot learning for tasks where you can supply a few examples, and chain-of-thought prompting for complex problem-solving.
  • Principles for Precision: Effective prompts require clear objectives, contextual relevance, conciseness, and sometimes creativity, ensuring the AI’s responses align with your expectations.

What is Prompt Engineering?

Prompt engineering is the process of designing and refining inputs or questions to guide artificial intelligence (AI), particularly Large Language Models (LLMs), to generate specific, desired outputs.

In simple terms, it’s like giving a very smart robot a specific set of instructions to make sure it understands exactly what you want it to do.


These instructions can range from asking it to write an essay, solve a math problem, or even create a piece of art. By carefully crafting these instructions or ‘prompts,’ you can guide the robot to not only understand the task but also to execute it in a way that meets your specific needs or expectations.

What Are Large Language Models?

LLMs, such as OpenAI’s ChatGPT, are AI systems trained on vast amounts of text data. They can generate text, answer questions, and even write code by predicting the next word in a sequence based on the input they receive.

PRO TIP: LLMs like ChatGPT do not “think” or engage in critical thinking as humans do. Their responses are predictions made by analyzing the structure and content of the data they were trained on.

Importance of LLMs in AI

LLMs are the backbone for chatbots, writing assistants, and more. Their ability to understand and produce human-like text opens up new possibilities across various fields, from education to tech support.

Understanding Prompt Engineering

The way you ask—the words you choose, how detailed you are, and the context you provide—makes all the difference in how well a computer responds. Think of it as guiding a robot to think more like a human and produce better results by carefully choosing your questions and instructions.

Guiding the Conversation

At its core, Prompt Engineering is about crafting the conversation between humans and machines. The right prompt can turn a vague response into a precise answer or a creative masterpiece. It’s not just about what you ask but how you ask it.

Strategic Communication

Effective Prompt Engineering requires strategic communication. Knowing how to phrase prompts means understanding the model’s training, its strengths, and its limitations. This insight allows for more targeted questions and, consequently, more useful answers.

Prompt Engineering Techniques

In prompt engineering, you adapt the way you write prompts depending on what you’re trying to accomplish and the information you have available. The following three techniques cover a broad spectrum of scenarios you might encounter while working with language models like ChatGPT:

Zero-shot Learning

This refers to the model’s ability to perform tasks it wasn’t specifically trained on, using just your prompt for guidance. It’s like asking the AI to solve a problem or answer a question without providing any examples.

Zero-shot Learning Example:

Scenario: You’re part of a debate club and need to quickly understand a stance on a new policy proposal without having prior detailed research on it.

Prompt: Analyze the proposed campus recycling program and determine if it would effectively reduce waste based on its outlined strategies: [Insert brief description of the program].

Why This Works: Here, the AI uses its generalized understanding of environmental policies and recycling programs to provide an analysis. Even without specific training on the new policy, it applies its vast knowledge to offer insights, helping you prepare for the debate.
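In code, a zero-shot request is just a single instruction with no worked examples attached. The sketch below illustrates this; the helper name and the one-line program description are made up for illustration, and the message format follows the common chat-API shape rather than any one vendor’s library:

```python
def zero_shot_prompt(task: str, material: str) -> list[dict]:
    """Build a zero-shot chat message: one instruction, zero examples."""
    return [{"role": "user", "content": f"{task}\n\n{material}"}]

messages = zero_shot_prompt(
    "Analyze the proposed campus recycling program and determine if it "
    "would effectively reduce waste based on its outlined strategies:",
    "The program adds sorted recycling bins to every dorm and dining hall.",
)
```

Sending `messages` to a model is omitted here; the point is how little scaffolding zero-shot prompting needs—just the task itself.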

Few-shot Learning

This technique involves providing the model with a few examples to help it understand the task at hand. It’s similar to giving the AI a mini-tutorial before asking it to complete a similar task.

Few-shot Learning Example:

Scenario: You’re writing an essay on Shakespeare’s use of iambic pentameter and need examples of its use in modern literature for comparison.

Prompt: Here are two examples of iambic pentameter in Shakespeare’s works: [Example 1], [Example 2]. Identify and explain the use of iambic pentameter in this contemporary poem: [Insert poem snippet].

Why This Works: By providing a few examples from Shakespeare, you’re teaching the AI the pattern to look for. The AI then uses this context to analyze the modern poem, helping you find and explain the usage of iambic pentameter in your essay.
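The same pattern can be expressed as a small prompt builder: worked examples first, then the new task. This is a minimal sketch—the function and label names are my own, not a standard API:

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, then the new task."""
    parts = [instruction]
    for source, analysis in examples:
        parts.append(f"Text: {source}\nAnalysis: {analysis}")
    parts.append(f"Now analyze: {query}")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    "Identify and explain the use of iambic pentameter.",
    [
        ("Shall I compare thee to a summer's day?", "Five iambs: da-DUM five times."),
        ("But soft, what light through yonder window breaks?", "Five iambs again."),
    ],
    "[Insert poem snippet]",
)
```

The two labeled examples teach the model the input/output pattern before it ever sees your actual query.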

Chain-of-Thought Prompting

This is a technique where prompts are designed to encourage the model to “think aloud,” following a series of logical steps before arriving at an answer. This approach can significantly improve the model’s performance on complex questions.

Chain-of-Thought Prompting Example:

Scenario: You’re working on a statistics homework problem involving probability and need to break down the steps to solve it.

Prompt: To calculate the probability of drawing a red card or a queen from a standard deck of cards, first calculate the probability of drawing a red card. Then, calculate the probability of drawing a queen. Finally, adjust for the overlap between red queens and sum these probabilities.

Why This Works: This prompt guides the AI through the problem-solving process step by step, similar to how you might solve it on paper. It helps you understand the logical progression for solving probability questions, making complex statistics homework more approachable.
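The arithmetic the prompt walks the model through can be checked by hand—or in a few lines of Python—which is a good habit whenever you use chain-of-thought prompts for math:

```python
from fractions import Fraction

p_red = Fraction(26, 52)        # half the deck is red
p_queen = Fraction(4, 52)       # four queens in the deck
p_red_queen = Fraction(2, 52)   # overlap: queen of hearts, queen of diamonds

# Inclusion-exclusion: P(red or queen) = P(red) + P(queen) - P(red and queen)
p_total = p_red + p_queen - p_red_queen
print(p_total)  # 7/13
```

Verifying the model’s steps against a quick computation like this catches cases where the reasoning sounds plausible but the numbers are wrong.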

Principles of Effective Prompt Design

Crafting effective prompts is an art that plays an important role in leveraging the capabilities of Large Language Models (LLMs) like ChatGPT. Here, we explore the foundational principles that guide the creation of prompts designed to elicit accurate, relevant, and insightful responses from AI.

Clear Objectives

Every prompt should have a clear objective. This clarity guides the AI in understanding exactly what is expected, whether it’s generating text, answering a question, or creating code. A well-defined prompt eliminates ambiguity, allowing the model to focus on delivering precise outputs.

Contextual Relevance

Incorporating context within prompts improves the model’s understanding and the relevance of its response. By providing background information or specifying the scope, prompts can lead to more accurate and contextually appropriate outputs. This principle is critical for tasks requiring nuanced understanding or domain-specific knowledge.

Conciseness and Precision

Conciseness and precision in prompt design ensure the model’s responses are directly relevant and to the point. Overly wordy prompts can dilute the focus, while overly sparse prompts may not provide enough direction, leading to varied interpretations by the AI.

Creativity and Flexibility

While clarity and precision are key, incorporating a level of creativity and flexibility in prompt design can inspire more dynamic and innovative responses from LLMs. This balance encourages the model to explore a wider range of solutions or ideas, especially in creative writing or problem-solving tasks.

Iterative Testing

Effective prompt design is rarely achieved on the first try. Iterative testing and refinement based on the model’s responses enable the prompt engineer to fine-tune the language, structure, and detail level to optimize outcomes. This process is essential for understanding how different models react to various prompt styles and structures.

Feedback Integration

Integrating feedback from the model’s responses into future prompts is a critical principle for continuous improvement. This involves analyzing the quality, relevance, and accuracy of AI-generated content and adjusting prompts accordingly to enhance future interactions.

The Bottom Line

In our exploration of Prompt Engineering and Large Language Models (LLMs), we’ve uncovered the essentials of communicating with AI, from foundational concepts to the art of prompt design. Through understanding LLMs and prompt engineering principles, you’re better equipped to explore the AI landscape.

For more AI insights, tips, and job search guidance, subscribe to GradSimple. Be the first to receive our latest resources and updates, tailored to help you, the college student, transition into the professional world with ease.
