About this Course
With the incredible capabilities of large language models (LLMs), enterprises are eager to integrate them into
their products and internal applications for a wide variety of use cases, including (but not limited to) text
generation, large-scale document analysis, and chatbot assistants.
The fastest way to begin leveraging LLMs for diverse tasks is by using modern prompt engineering
techniques. These techniques are also foundational for more advanced LLM-based methods such as
Retrieval-Augmented Generation (RAG) and Parameter-Efficient Fine-Tuning (PEFT). In this workshop, learners
will work with an NVIDIA language model NIM, powered by the open-source Llama-3.1 large language model,
alongside the popular LangChain library. The workshop will provide a foundational skill set for building a range
of LLM-based applications using prompt engineering.
Learning Objectives
By the end of the workshop, you will:
Understand how to apply iterative prompt engineering best practices to create LLM-based applications
for various language-related tasks.
Be procient in using LangChain to organize and compose LLM workows.
Write application code to harness LLMs for generative tasks, document analysis, chatbot applications, and
more.
Topics Covered
NVIDIA NIM
LangChain
Llama 3.1
Course Details
Duration: 08:00
Price:
Level: Technical - Beginner
Subject: Generative AI/LLM
Language: English
Course Prerequisites: This course is primarily intended for intermediate-level and above Python developers with a solid understanding of LLM fundamentals.
Assessment Type: Skills-based coding projects challenge students' ability to write code for a variety of LLM-based applications.
Certificate: Upon successful completion of the workshop, participants will receive an NVIDIA DLI certificate to recognize their subject matter competency and support professional career growth.
Hardware Requirements: Desktop or laptop computer capable of running the latest version of Chrome or Firefox. Each participant will be provided with dedicated access to a fully configured, GPU-accelerated server in the cloud.
Course Outline
Below is a suggested timeline for the course. Please work with the instructor to find the best timeline for
your session.
Course Introduction (30 minutes)
Orient to the main workshop topics, schedule, and prerequisites.
Learn why prompt engineering is core to interacting with Large
Language Models (LLMs).
Discuss how prompt engineering can be used to develop many
classes of LLM-based applications.
Learn about NVIDIA LLM NIM, used to deploy the Llama 3.1 LLM
used in the workshop.
Introduction to Prompting (60 minutes)
Get familiar with the workshop environment.
Create and view responses from your first prompts using the OpenAI
API and LangChain.
Learn how to stream LLM responses and send LLM prompts in
batches, comparing differences in performance.
Begin practicing the process of iterative prompt development.
Create and use your first prompt templates.
Do a mini-project where you perform a combination of analysis and
generative tasks on a batch of inputs.
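The prompt-template idea from this section can be sketched in a few lines of plain Python. This is an illustrative stand-in for what LangChain's `ChatPromptTemplate` does in the workshop, not LangChain's actual API; the class name, template text, and inputs below are invented for the example.

```python
# A minimal prompt template: a format string with named slots, reused
# across a batch of inputs. This only illustrates the pattern the
# workshop implements with LangChain's real template classes.

class SimplePromptTemplate:
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)

sentiment_template = SimplePromptTemplate(
    "Classify the sentiment of the following review as positive or negative.\n"
    "Review: {review}\n"
    "Sentiment:"
)

# Batching: render one prompt per input, ready to send to the model.
reviews = ["The keynote was fantastic.", "The demo kept crashing."]
prompts = [sentiment_template.format(review=r) for r in reviews]
```

Separating the fixed instructions from the per-input slot is what makes iterative prompt development practical: you refine the template once and re-run the whole batch.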
Break (60 minutes)
LangChain Expression Language (LCEL), Runnables, and Chains (75 minutes)
Learn about LangChain runnables, and the ability to compose them
into chains using LangChain Expression Language (LCEL).
Write custom functions and convert them into runnables that can be
included in LangChain chains.
Compose multiple LCEL chains into a single larger application chain.
Exploit opportunities for parallel work by composing parallel LCEL
chains.
Do a mini-project where you perform a combination of analysis and
generative tasks on a batch of inputs using LCEL and parallel
execution.
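The runnable-and-chain pattern can be mimicked in plain Python to show what LCEL's `|` composition is doing under the hood. This is a toy sketch of the idea, not LangChain's implementation; the class and functions are invented for illustration.

```python
# Toy version of the LCEL idea: a "runnable" wraps a function, and the
# `|` operator pipes one runnable's output into the next, forming a chain.

class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # left | right -> a new runnable that runs left, then right
        return Runnable(lambda value: other.invoke(self.invoke(value)))

to_upper = Runnable(str.upper)
exclaim = Runnable(lambda s: s + "!")

chain = to_upper | exclaim
result = chain.invoke("prompt engineering")  # -> "PROMPT ENGINEERING!"
```

Because every runnable exposes the same `invoke` interface, custom functions, prompts, and models all compose the same way, and longer chains are built by piping more runnables together.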
Prompting With Messages (60 minutes)
Learn about two of the core chat message types, human and AI
messages, and how to use them explicitly in application code.
Provide chat models with instructive examples by way of a technique
called few-shot prompting.
Work explicitly with the system message, which will allow you to
define an overarching persona and role for your chat models.
Use chain-of-thought prompting to augment your LLM's ability to
perform tasks requiring complex reasoning.
Manage messages to retain conversation history and enable chatbot
functionality.
Do a mini-project where you build a simple yet flexible chatbot
application capable of assuming a variety of roles.
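The message types in this section follow the same role-based structure as the OpenAI chat format. The sketch below shows, with plain Python dictionaries, how a system message, few-shot examples, and accumulated conversation history fit together in one message list; the message contents are invented for illustration.

```python
# System message: sets an overarching persona for the model.
# Few-shot examples: human/AI message pairs placed before the real query.
# History: append each user turn and model reply to retain context.

messages = [
    {"role": "system",
     "content": "You are a concise assistant who answers in one sentence."},
    # Few-shot example: a human message, then the AI reply we want imitated.
    {"role": "user", "content": "Summarize: The cat sat on the mat."},
    {"role": "assistant", "content": "A cat sat on a mat."},
]

def add_turn(history, user_text, model_reply):
    """Record one conversation round so a chatbot keeps its context."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": model_reply})

add_turn(messages,
         "Summarize: It rained all day in Paris.",
         "Paris had rain all day.")
```

Sending the full list (system message, examples, and history) on every call is what gives a stateless chat model the appearance of memory and a consistent persona.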
Break (15 minutes)
Structured Output (60 minutes)
Explore some basic methods for using LLMs to generate structured
data in batch for downstream use.
Generate structured output through a combination of Pydantic
classes and LangChain's `JsonOutputParser`.
Learn how to extract and tag data you specify from long-form
text.
Do a mini-project where you use structured data generation
techniques to perform data extraction and document tagging on an
unstructured text document.
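In the workshop, structured output is produced with Pydantic classes and LangChain's `JsonOutputParser`; the standard-library sketch below shows the underlying pattern of parsing a model's JSON reply into a typed object and validating required fields. The example reply and field names are invented.

```python
import json
from dataclasses import dataclass, fields

# The schema we asked the model to fill in (Pydantic plays this role
# in the workshop; a dataclass is a stdlib stand-in).
@dataclass
class DocumentTags:
    title: str
    topic: str
    sentiment: str

# Pretend this string came back from the LLM after prompting it to
# reply with JSON containing exactly these keys.
raw_reply = '{"title": "Q3 Earnings Call", "topic": "finance", "sentiment": "positive"}'

data = json.loads(raw_reply)
missing = [f.name for f in fields(DocumentTags) if f.name not in data]
if missing:
    raise ValueError(f"model reply missing keys: {missing}")

tags = DocumentTags(**data)
```

Converting free-form model output into validated objects like this is what makes the results safe to hand to downstream code in batch.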
Tool Use and Agents (75 minutes)
Create LLM-external functionality called tools, and make your LLM
aware of their availability for use.
Create an agent capable of reasoning about when tool use is
appropriate, and integrating the result of tool use into its responses.
Do a mini-project where you create an LLM agent capable of utilizing
external API calls to augment its responses with real-time data.
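The tool-use loop can be sketched end to end with a hard-coded stand-in for the model's decision step. In the workshop the LLM itself reasons about when a tool is appropriate; here `decide` is a stub so the control flow is visible. All names, the tool, and its output are illustrative.

```python
# A "tool" is just a function the application exposes to the model.
def get_time(city: str) -> str:
    return f"12:00 in {city}"  # stand-in for a real clock/API lookup

TOOLS = {"get_time": get_time}

def decide(question, tool_result):
    """Stub for the model: first request a tool, then answer with its result."""
    if tool_result is None:
        return {"action": "call_tool", "tool": "get_time",
                "args": {"city": "Paris"}}
    return {"action": "answer", "content": f"Right now it is {tool_result}."}

def run_agent(question):
    tool_result = None
    while True:
        step = decide(question, tool_result)
        if step["action"] == "call_tool":
            # Execute the requested tool and feed the result back in.
            tool_result = TOOLS[step["tool"]](**step["args"])
        else:
            return step["content"]

answer = run_agent("What time is it in Paris?")  # -> "Right now it is 12:00 in Paris."
```

The loop structure, in which a decision step either requests a tool call or produces a final answer, is the core of the agent pattern: swapping the stub for a real LLM is what the mini-project builds toward.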
Assessment and Final Review (30 minutes)
Review key learnings and answer questions.
Earn a certicate of competency for the workshop.
Complete the workshop survey.
Get recommendations for the next steps to take in your learning
journey.
