Programming Large Language Models with Azure OpenAI: Conversational programming and prompt engineering with LLMs
Francesco Esposito
Programming Large Language Models with Azure OpenAI: Conversational programming and prompt engineering with LLMs
Published with the authorization of Microsoft Corporation by: Pearson
Education, Inc.
Trademarks
Microsoft and the trademarks listed at http://www.microsoft.com on the
“Trademarks” webpage are trademarks of the Microsoft group of companies.
All other marks are property of their respective owners.
Special Sales
For information about buying this title in bulk quantities, or for special sales
opportunities (which may include electronic versions; custom cover designs;
and content particular to your business, training goals, marketing focus, or
branding interests), please contact our corporate sales department at
[email protected] or (800) 382-3419.
For government sales inquiries, please contact
[email protected].
For questions about sales outside the U.S., please contact
[email protected].
Editor-in-Chief
Brett Bartow
Executive Editor
Loretta Yates
Associate Editor
Shourav Bose
Development Editor
Kate Shoup
Managing Editor
Sandra Schroeder
Copy Editor
Dan Foster
Indexer
Timothy Wright
Proofreader
Donna E. Mulder
Technical Editor
Dino Esposito
Editorial Assistant
Cindy Teeters
Cover Designer
Twist Creative, Seattle
Compositor
codeMantra
Graphics
codeMantra
Figure Credits
Figure 4.1: LangChain, Inc
Figures 7.1, 7.2, 7.4: Snowflake, Inc
Figure 8.2: SmartBear Software
Figure 8.3: Postman, Inc
Dedication
To I.
Because not dedicating a book to you would have been a sacrilege.
Contents at a Glance
Introduction
Index
Contents
Acknowledgments
Introduction
Chapter 8 Conversational UI
Overview
Scope
Tech stack
The project
Minimal API setup
OpenAPI
LLM integration
Possible extensions
Summary
Index
Acknowledgments
In the spring of 2023, when I told my dad how cool Azure OpenAI was
becoming, his reply was kind of a shock: “Why don’t you write a book about
it?” He said it so naturally that it hit me as if he really thought I could do it.
In fact, he added, “Are you up for it?” Then there was no need to say more.
Loretta Yates at Microsoft Press enthusiastically accepted my proposal, and
the story of this book began in June 2023.
AI has been a hot topic for the better part of a decade, but the emergence
of new-generation large language models (LLMs) has propelled it into the
mainstream. The increasing number of people using them translates to more
ideas, more opportunities, and new developments. And this makes all the
difference.
Hence, the book you hold in your hands can’t be the ultimate and
definitive guide to AI and LLMs because the speed at which AI and LLMs
evolve is impressive and because—by design—every book is an act of
approximation, a snapshot of knowledge taken at a specific moment in time.
Approximation inevitably leads to some form of dissatisfaction, and
dissatisfaction leads us to take on new challenges. In this regard, I wish for
myself decades of dissatisfaction. And a few more years of being on the stage
presenting books written for a prestigious publisher—it does wonders for my
ego.
First, I feel somewhat indebted to all my first dates since May because
they had to endure monologues lasting at least 30 minutes on LLMs and
some weird new approach to transformers.
True thanks are a private matter, but publicly I want to thank Martina first,
who cowrote the appendix with me and always knows what to say to make
me better. My gratitude to her is keeping a promise she knows. Thank you,
Martina, for being an extraordinary human being.
To Gianfranco, who taught me the importance of discussing and
expressing, even loudly, when something doesn’t please us, and taught me to
always ask, because the worst thing that can happen is hearing a no. Every
time I engage in a discussion, I will think of you.
I also want to thank Matteo, Luciano, Gabriele, Filippo, Daniele,
Riccardo, Marco, Jacopo, Simone, Francesco, and Alessia, who worked with
me and supported me during my (hopefully not too frequent) crises. I also
have warm thoughts for Alessandro, Antonino, Sara, Andrea, and Cristian
who tolerated me whenever we weren’t like 25-year-old youngsters because I
had to study and work on this book.
To Mom and Michela, who put up with me before the book and probably
will continue after. To my grandmas. To Giorgio, Gaetano, Vito, and Roberto
for helping me to grow every day. To Elio, who taught me how to dress and
see myself in more colors.
As for my dad, Dino, he never stops teaching me new things—for
example, how to get paid for doing things you would just love to do, like
being the technical editor of this book. Thank you, both as a father and as an
editor. You bring to my mind a song you well know: “Figlio, figlio, figlio.”
Beyond Loretta, if this book came to life, it was also because of the hard
work of Shourav, Kate, and Dan. Thank you for your patience and for
trusting me so much.
This book is my best until the next one!
Introduction
This is my third book on artificial intelligence (AI), and the first I wrote on
my own, without the collaboration of a coauthor. The sequence in which my
three books have been published reflects my own learning path, motivated by
a genuine thirst to understand AI for far more than mere business
considerations. The first book, published in 2020, introduced the
mathematical concepts behind machine learning (ML) that make it possible to
classify data and make timely predictions. The second book, which focused
on the Microsoft ML.NET framework, was about concrete applications—in
other words, how to make fancy algorithms work effectively on amounts of
data hiding their complexity behind the charts and tables of a familiar web
front end.
Then came ChatGPT.
The technology behind astonishing applications like ChatGPT is called a
large language model (LLM), and LLMs are the subject of this third book.
LLMs add a crucial capability to AI: the ability to generate content in
addition to classifying and predicting. LLMs represent a paradigm shift,
raising the bar of communication between humans and computers and
opening the floodgates to new applications that for decades we could only
dream of.
And for decades, we did dream of these applications. Literature and
movies presented various supercomputers capable of crunching any sort of
data to produce human-intelligible results. An extremely popular example
was HAL 9000—the computer that governed the spaceship Discovery in the
movie 2001: A Space Odyssey (1968). Another famous one was JARVIS (Just A Rather Very Intelligent System), the computer that served as Tony Stark's home assistant in Iron Man and other movies in the Marvel universe.
Often, all that the human characters in such books and movies do is
simply “load data into the machine,” whether in the form of paper
documents, digital files, or media content. Next, the machine autonomously
figures out the content, learns from it, and communicates back to humans
using natural language. But of course, those supercomputers were conceived
by authors; they were only science fiction. Today, with LLMs, it is possible
to devise and build concrete applications that not only make human–
computer interaction smooth and natural, but also turn the old dream of
simply “loading data into the machine” into a dazzling reality.
This book shows you how to build software applications using the same
type of engine that fuels ChatGPT to autonomously communicate with users
and orchestrate business tasks driven by plain textual prompts. No more, no
less—and as easy and striking as it sounds!
To fully grasp the value of a programming book on LLMs, there are a couple
of prerequisites, including proficiency in foundational programming concepts
and a familiarity with ML fundamentals. Beyond these, a working knowledge
of relevant programming languages and frameworks, such as Python and
possibly ASP.NET Core, is helpful, as is an appreciation for the significance
of classic natural language processing in the context of business domains.
Overall, a blend of programming expertise, ML awareness, and linguistic
understanding is recommended for a comprehensive grasp of the book’s
content.
This book might not be for you if you’re just seeking a reference book to find
out in detail how to use a particular pattern or framework. Although the book
discusses advanced aspects of popular frameworks (for example, LangChain
and Semantic Kernel) and APIs (such as OpenAI and Azure OpenAI), it does
not qualify as a programming reference on any of these. The focus of the
book is on using LLMs to build useful applications in the business domains
where LLMs really fit well.
Stay in touch
Let’s keep the conversation going! We’re on X / Twitter:
http://twitter.com/MicrosoftPress.
Chapter 1
Luring someone into reading a book is never a small feat. If it’s a novel, you
must convince them that it’s a beautiful story, and if it’s a technical book,
you must assure them that they’ll learn something. In this case, we’ll try to
learn something.
Over the past two years, generative AI has become a prominent buzzword.
It refers to a field of artificial intelligence (AI) focused on creating systems
that can generate new, original content autonomously. Large language
models (LLMs) like GPT-3 and GPT-4 are notable examples of generative
AI, capable of producing human-like text based on given input.
The rapid adoption of LLMs is leading to a paradigm shift in
programming. This chapter discusses this shift, the reasons for it, and its
prospects. Its prospects include conversational programming, in which you
explain with words—rather than with code—what you want to achieve. This
type of programming will likely become very prevalent in the future.
No promises, though. As you’ll soon see, explaining with words what you
want to achieve is often as difficult as writing code.
This chapter covers topics that didn’t find a place elsewhere in this book.
It’s not necessary to read every section or follow a strict order. Take and read
what you find necessary or interesting. I expect you will come back to read
certain parts of this chapter after you finish the last one.
LLMs at a glance
History of LLMs
The evolution of LLMs intersects with both the history of conventional AI
(often referred to as predictive AI) and the domain of natural language
processing (NLP). NLP encompasses natural language understanding (NLU),
which attempts to reduce human speech into a structured ontology, and
natural language generation (NLG), which aims to produce text that is
understandable by humans.
LLMs are a subtype of generative AI focused on producing text based on
some kind of input, usually in the form of written text (referred to as a
prompt) but now expanding to multimodal inputs, including images, video,
and audio. At a glance, most LLMs can be seen as a very advanced form of
autocomplete, as they generate the next word. Although they specifically
generate text, LLMs do so in a manner that simulates human reasoning,
enabling them to perform a variety of intricate tasks. These tasks include
sentiment analysis, summarization, translation, entity and intent recognition,
structured information extraction, document generation, and so on.
LLMs represent a natural extension of the age-old human aspiration to
construct automatons (ancestors to contemporary robots) and imbue them
with a degree of reasoning and language. They can be seen as a brain for such
automatons, able to respond to an external input.
AI beginnings
Modern software—and AI as a vibrant part of it—represents the culmination
of an embryonic vision that has traversed the minds of great thinkers since
the 17th century. Various mathematicians, philosophers, and scientists, in
diverse ways and at varying levels of abstraction, envisioned a universal
language capable of mechanizing the acquisition and sharing of knowledge.
Gottfried Leibniz (1646–1716), in particular, contemplated the idea that at
least a portion of human reasoning could be mechanized.
The modern conceptualization of intelligent machinery took shape in the
mid-20th century, courtesy of renowned mathematicians Alan Turing and
Alonzo Church. Turing’s exploration of “intelligent machinery” in 1947,
coupled with his groundbreaking 1950 paper, “Computing Machinery and
Intelligence,” laid the cornerstone for the Turing test—a pivotal concept in
AI. This test challenged machines to exhibit human behavior
(indistinguishable by a human judge), ushering in the era of AI as a scientific
discipline.
Note
Considering recent advancements, a reevaluation of the original
Turing test may be warranted to incorporate a more precise
definition of human and rational behavior.
NLP
NLP is an interdisciplinary field within AI that aims to bridge the interaction
between computers and human language. While historically rooted in
linguistic approaches, distinguishing itself from the contemporary sense of
AI, NLP has perennially been a branch of AI in a broader sense. In fact, the
overarching goal has consistently been to artificially replicate an expression
of human intelligence—specifically, language.
The primary goal of NLP is to enable machines to understand, interpret,
and generate human-like language in a way that is both meaningful and
contextually relevant. This interdisciplinary field draws from linguistics,
computer science, and cognitive psychology to develop algorithms and
models that facilitate seamless interaction between humans and machines
through natural language.
The history of NLP spans several decades, evolving from rule-based
systems in the early stages to contemporary deep-learning approaches,
marking significant strides in the understanding and processing of human
language by computers.
Originating in the 1950s, early efforts, such as the Georgetown-IBM
experiment in 1954, aimed at machine translation from Russian to English,
laying the foundation for NLP. However, these initial endeavors were
primarily linguistic in nature. Subsequent decades witnessed the influence of
Chomskyan linguistics, shaping the field’s focus on syntactic and
grammatical structures.
The 1980s brought a shift toward statistical methods, like n-grams, which use co-occurrence frequencies of words to make predictions. An example was IBM's Candide system for statistical machine translation. However, rule-based approaches struggled with the complexity of natural language. The 1990s saw
a resurgence of statistical approaches and the advent of machine learning
(ML) techniques such as hidden Markov models (HMMs) and statistical
language models. The introduction of the Penn Treebank, a 7-million word
dataset of part-of-speech tagged text, and statistical machine translation
systems marked significant milestones during this period.
In the 2000s, the rise of data-driven approaches and the availability of
extensive textual data on the internet rejuvenated the field. Probabilistic
models, including maximum-entropy models and conditional random fields,
gained prominence. Begun in the 1980s but finalized years later, the development of WordNet, a lexical-semantic database of English (with its groups of synonyms, or synsets, and their relations), contributed to a deeper understanding of word semantics.
The landscape transformed in the 2010s with the emergence of deep
learning made possible by a new generation of graphics processing units
(GPUs) and increased computing power. Neural network architectures—
particularly transformers like Bidirectional Encoder Representations from
Transformers (BERT) and Generative Pretrained Transformer (GPT)—
revolutionized NLP by capturing intricate language patterns and contextual
information. The focus shifted to data-driven and pretrained language
models, allowing for fine-tuning of specific tasks.
LLMs
An LLM, exemplified by OpenAI’s GPT series, is a generative AI system
built on advanced deep-learning architectures like the transformer (more on
this in the appendix).
These models operate on the principle of unsupervised and self-supervised
learning, training on vast text corpora to comprehend and generate coherent
and contextually relevant text. They output sequences of text (which can take the form of ordinary prose but also protein structures, code, SVG, JSON, XML, and so on), demonstrating a remarkable ability to continue and expand on given prompts in a manner that emulates human language.
The architecture of these models, particularly the transformer architecture,
enables them to capture long-range dependencies and intricate patterns in
data. The concept of word embeddings, a crucial precursor, represents words
as continuous vectors (Mikolov et al. in 2013 through Word2Vec),
contributing to the model’s understanding of semantic relationships between
words. Word embeddings form the first "layer" of an LLM.
The generative nature of the latest models enables them to be versatile in
output, allowing for tasks such as text completion, summarization, and
creative text generation. Users can prompt the model with various queries or
partial sentences, and the model autonomously generates coherent and
contextually relevant completions, demonstrating its ability to understand and
mimic human-like language patterns.
The journey began with the introduction of word embeddings in 2013,
notably with Mikolov et al.’s Word2Vec model, revolutionizing semantic
representation. Recurrent neural network (RNN) and long short-term memory (LSTM) architectures followed, addressing challenges in sequence processing and long-range dependencies. The
transformative shift arrived with the introduction of the transformer
architecture in 2017, allowing for parallel processing and significantly
improving training times.
In 2018, Google researchers Devlin et al. introduced BERT. BERT
adopted a bidirectional context prediction approach. During pretraining,
BERT is exposed to a masked language modeling task in which a random
subset of words in a sentence is masked and the model predicts those masked
words based on both left and right context. This bidirectional training allows
BERT to capture more nuanced contextual relationships between words. This
makes it particularly effective in tasks requiring a deep understanding of
context, such as question answering and sentiment analysis.
During the same period, OpenAI’s GPT series marked a paradigm shift in
NLP, starting with GPT in 2018 and progressing through GPT-2 in 2019, to
GPT-3 in 2020, and GPT-3.5-turbo, GPT-4, and GPT-4-turbo with vision (accepting multimodal inputs) in 2023. As autoregressive models, these predict the next token (the atomic element of natural language as it is processed by machines) or word in a sequence based on the preceding context. GPT's
autoregressive approach, predicting one token at a time, allows it to generate
coherent and contextually relevant text, showcasing versatility and language
understanding. These models are huge, however. For example, GPT-3 has a massive scale of 175 billion parameters. (Detailed information about GPT-3.5-turbo and GPT-4 is not available at the time of this writing.) The
fact is, these models can scale and generalize, thus reducing the need for task-
specific fine-tuning.
Functioning basics
The core principle guiding the functionality of most LLMs is autoregressive
language modeling, wherein the model takes input text and systematically
predicts the subsequent token or word (more on the difference between these
two terms shortly) in the sequence. This token-by-token prediction process is
crucial for generating coherent and contextually relevant text. However, as
emphasized by Yann LeCun, this approach can accumulate errors; if the N-th
token is incorrect, the model may persist in assuming its correctness,
potentially leading to inaccuracies in the generated text.
Until 2020, fine-tuning was the predominant method for tailoring models
to specific tasks. Recent advancements, however—particularly exemplified
by larger models like GPT-3—have introduced prompt engineering. This
allows these models to achieve task-specific outcomes without conventional
fine-tuning, relying instead on precise instructions provided as prompts.
Models such as those found in the GPT series are intricately crafted to
assimilate comprehensive knowledge about the syntax, semantics, and
underlying ontology inherent in human language corpora. While these models are proficient at capturing valuable linguistic information, it is imperative to acknowledge that they may also inherit inaccuracies and biases present in their training corpora.
Embeddings
Tokenization and embeddings are closely related concepts in NLP.
Tokenization involves breaking down a sequence of text into smaller units called tokens. These tokens are converted into numeric IDs and serve as the basic building
blocks for the model to process textual information. Embeddings, on the
other hand, refer to the numerical and dense representations of these tokens in
a high-dimensional vector space, usually 1000+ dimensions.
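To make tokenization concrete, here is a minimal sketch using tiktoken, the open-source tokenizer library published by OpenAI for its GPT models; the sample sentence is only illustrative.

import tiktoken  # pip install tiktoken

# cl100k_base is the encoding used by GPT-3.5-turbo and GPT-4
encoding = tiktoken.get_encoding("cl100k_base")

text = "Programming large language models is fun."
token_ids = encoding.encode(text)                   # text -> list of integer IDs
pieces = [encoding.decode([t]) for t in token_ids]  # IDs -> readable token pieces

print(token_ids)   # one integer per token
print(pieces)      # the corresponding text fragments, often subwords

Note that tokens are frequently subwords rather than whole words, which is why token counts and word counts differ.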
Embeddings are generated through an embedding layer in the model, and
they encode semantic relationships and contextual information about the
tokens. The embedding layer essentially learns, during training, a distributed
representation for each token, enabling the model to understand the
relationships and similarities between words or subwords based on their
contextual usage.
Semantic search is made simple through embeddings: We can embed
different sentences and measure their distances in this 1000+ dimensional
space. The shorter the sentence is and the larger this high-dimensional space
is, the more accurate the semantic representation is. The inner goal of
embedding is to have words like queen and king close in the embedding
space, with woman being quite close to queen as well.
Embeddings can work on a word level, like Word2Vec (2013), or on a
sentence level, like OpenAI's text-embedding-ada-002 (with its latest version released in 2022).
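The following minimal sketch illustrates semantic similarity with sentence-level embeddings, assuming the openai Python package, an OPENAI_API_KEY environment variable, and the text-embedding-ada-002 model just mentioned; the sentences are made up for illustration.

import numpy as np
from openai import OpenAI  # pip install openai numpy

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> np.ndarray:
    # text-embedding-ada-002 returns a 1,536-dimensional dense vector
    response = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(response.data[0].embedding)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embed("Who rules a monarchy?")
related = embed("A king or a queen is the head of a monarchy.")
unrelated = embed("Gradient descent minimizes a loss function.")

# The semantically related sentence should score noticeably higher
print(cosine_similarity(query, related))
print(cosine_similarity(query, unrelated))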
While an embedding model (a model that takes some text as input and outputs a dense numerical vector) is usually derived from the encoder part of a transformer model, for GPT models it's a different story. In fact, GPT-4 has its own inner embedding layers (token and positional) feeding its attention blocks, while the proper embedding model (text-embedding-ada-002) is trained separately and not directly used within GPT-4. Text-embedding-ada-002 is exposed just like the text-generation models and is used for similarity search and similar use cases (discussed later).
In summary, tokenization serves as the initial step in preparing textual
data for ML models, and embeddings enhance this process by creating
meaningful numerical representations that capture the semantic nuances and
contextual information of the tokens.
Training steps
The training of GPT-like language models involves several key phases, each
contributing to the model’s development and proficiency:
1. Initial training on crawl data
2. Supervised fine-tuning (SFT)
3. Reward modeling
4. Reinforcement learning from human feedback (RLHF)
Reward modeling
Once the model is fine-tuned with SFT, a reward model is created. Human
evaluators review and rate different model outputs based on quality,
relevance, accuracy, and other criteria. These ratings are used to create a
reward model that predicts the “reward” or rating for various outputs.
Inference
Inference is an autoregressive generation process: the model is called iteratively, feeding back its own generated outputs along with the initial input. During causal language modeling, a sequence of text tokens is taken
as input, and the model returns the probability distribution for the next token.
The non-deterministic aspect arises when selecting the next token from
this distribution, often achieved through sampling. However, some models
provide a seed option for deterministic outcomes.
The selection process can range from simple (choosing the most likely
token) to complex (involving various transformations). Parameters like
temperature influence the model’s creativity, with high temperatures yielding
a flatter probability distribution.
The iterative process continues until a stopping condition—ideally
determined by the model or a predefined maximum length—is reached.
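The sampling step can be illustrated in isolation with a short sketch: given the scores (logits) the model assigns to each candidate token, temperature rescales them before the softmax, and the next token is drawn from the resulting distribution. The vocabulary and scores below are invented purely for illustration.

import numpy as np

def sample_next_token(logits, temperature=1.0, seed=None):
    # Higher temperature -> flatter distribution -> more varied, "creative" picks;
    # temperature near 0 -> almost always the most likely token.
    rng = np.random.default_rng(seed)          # a seed makes the draw reproducible
    scaled = np.array(logits) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())      # numerically stable softmax
    probs = probs / probs.sum()
    return rng.choice(len(probs), p=probs)

vocabulary = ["Paris", "Rome", "London", "banana"]   # toy vocabulary
logits = [4.0, 3.5, 3.0, 0.1]                        # toy scores for the next token

for t in (0.2, 1.0, 2.0):
    idx = sample_next_token(logits, temperature=t, seed=42)
    print(t, vocabulary[idx])

In a real model the vocabulary has tens of thousands of entries and the logits come from the final layer of the transformer, but the selection logic is the same.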
When the model generates incorrect, nonsensical, or even false
information, it is called hallucination. When LLMs generate text, they
operate as prompt-based extrapolators, lacking the citation of specific training
data sources, as they are not designed as databases or search engines. The
process of abstraction—transforming both the prompt and training data—can
contribute to hallucination due to limited contextual understanding, leading to
potential information loss.
Despite being trained on hundreds of billions or even trillions of tokens (nearly 1 TB of data in the case of GPT-3), the weights of these models, which determine their size, are often 20% to 40% smaller than the original data. Here, quantization is employed to reduce weight size further, truncating the precision of the weights.
However, LLMs are not engineered as proper lossless compressors, resulting
in information loss at some point; this is a possible heuristic explanation for
hallucination.
One more reason is an intrinsic limitation of LLMs as autoregressive
predictors. In fact, during the prediction of the next token, LLMs rely heavily
on the tokens within their context window belonging to the dataset
distribution, which is primarily composed of text written by humans. As we
execute LLMs and sample tokens from them, each sampled token
incrementally shifts the model slightly outside the distribution it was initially
trained on. The model’s actual input is generated partially by itself, and as we
extend the length of the sequence we aim to predict, we progressively move
the model beyond the familiar distribution it has learned.
Note
Hallucinations can be considered a feature in LLMs, especially
when seeking creativity and diversity. For instance, when
requesting a fantasy story plot from ChatGPT or other LLMs, the
objective is not replication but the generation of entirely new
characters, scenes, and storylines. This creative aspect relies on the
models not directly referencing the data on which they were trained,
allowing for imaginative and diverse outputs.
Multimodal models
Most ML models are trained and operate in a unimodal way, using a single
type of data—text, image, or audio. Multimodal models amalgamate
information from diverse modalities, encompassing elements like images and
text. Like humans, they can seamlessly navigate different data modes. They
are usually subject to a slightly different training process.
There are different types of multimodalities:
Multimodal input This includes the following:
Text and image input Multimodal input systems process both text and
image inputs. This configuration is beneficial for tasks like visual
question answering, where the model answers questions based on
combined text and image information.
Audio and text input Systems that consider both audio and text inputs
are valuable in applications like speech-to-text and multimodal
chatbots.
Multimodal output This includes the following:
Text and image output Some models generate both text and image
outputs simultaneously. This can be observed in tasks like text-to-
image synthesis or image captioning.
Audio and text output In scenarios where both audio and text outputs
are required, such as generating spoken responses based on textual
input, multimodal output models come into play.
Multimodal input and output This includes the following:
Text, image, and audio input Comprehensive multimodal systems
process text, image, and audio inputs collectively, enabling a broader
understanding of diverse data sources.
Text, image, and audio output Models that produce outputs in
multiple modalities offer versatile responses—for instance, generating
textual descriptions, images, and spoken content in response to a user
query.
The shift to multimodal models is exemplified by pioneering models like
DeepMind’s Flamingo, Salesforce’s BLIP, and Google’s PaLM-E. Now
OpenAI's GPT-4 with vision, a multimodal input model, has entered the market.
Given the current landscape, multimodal output (but input as well) can be
achieved by engineering existing systems and leveraging the integration
between different models. For instance, one can call OpenAI’s DALL-E for
generating an image based on a description from OpenAI GPT-4 or apply the
speech-to-text function from OpenAI Whisper and pass the result to GPT-4.
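As a hedged sketch of the second combination just mentioned: transcribing audio with Whisper and passing the result to GPT-4, assuming the openai Python package, an OPENAI_API_KEY environment variable, and an audio file named meeting.mp3.

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: speech-to-text with Whisper
with open("meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: pass the transcribed text to a GPT model
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Summarize the user's text in three bullet points."},
        {"role": "user", "content": transcript.text},
    ],
)
print(completion.choices[0].message.content)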
Note
Beyond enhancing user interaction, multimodal capabilities hold
promise for aiding visually impaired individuals in navigating both
the digital realm and the physical world.
AI engineering
Natural language programming, usually called prompt engineering,
represents a pivotal discipline in maximizing the capabilities of LLMs,
emphasizing the creation of effective prompts to guide LLMs in generating
desired outputs. For instance, when asking a model to “return a JSON list of
the cities mentioned in the following text,” a prompt engineer should know
how to rephrase the prompt (or know which tools and frameworks might
help) if the model starts returning introductory text before the proper JSON.
In the same way, a prompt engineer should know what prompts to use when
dealing with a base model versus an RLHF model.
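As a minimal sketch of the "JSON list of cities" example (assuming the openai Python package and an OPENAI_API_KEY environment variable; the exact wording of the system prompt is precisely what a prompt engineer iterates on):

from openai import OpenAI  # pip install openai

client = OpenAI()

text = "Last year I flew from Rome to Paris and then took a train to London."

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,  # keep the output as deterministic as possible for extraction tasks
    messages=[
        {
            "role": "system",
            "content": (
                "You extract city names from the user's text. "
                "Return ONLY a JSON array of strings, with no introductory text "
                "and no explanations."
            ),
        },
        {"role": "user", "content": text},
    ],
)
print(response.choices[0].message.content)  # expected: ["Rome", "Paris", "London"]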
With the introduction of OpenAI’s GPTs and the associated store, there’s
a perception that anyone can effortlessly develop an app powered by LLMs.
But is this perception accurate? If it were true, the resulting apps would likely
have little to no value, making them challenging to monetize. Fortunately, the
reality is that constructing a genuinely effective LLM-powered app entails
much more than simply crafting a single creative prompt.
Sometimes prompt engineering (which does not necessarily involve
crafting a single prompt, but rather several different prompts) itself isn’t
enough, and a more holistic view is needed. This helps explain why the
advent of LLMs-as-a-product has given rise to a new professional role
integral to unlocking the full potential of these models. Often called an AI
engineer, this role extends beyond mere prompting of models. It encompasses
the comprehensive design and implementation of infrastructure and glue code
essential for the seamless functioning of LLMs.
Specifically, this role must deal with two key differences with respect to "simple" prompt engineering:
Explaining in detail to an LLM what one wants to achieve is roughly as
complex as writing traditional code, at least if one aims to maintain
control over the LLM’s behavior.
An application based on an LLM is, above all, an application. It is a
piece of traditional software executed on some infrastructure (mostly on
the cloud with microservices and all that cool stuff) and interacting with
other pieces of software (presumably APIs) that someone (perhaps
ourselves) has written. Moreover, most of the time, it is not a single
LLM that crafts the answer, but multiple LLMs, orchestrated with
different strategies (like agents in LangChain/Semantic Kernel, or an AutoGen-style multi-agent setup).
The connections between the various components of an LLM often
require “traditional” code. Even when things are facilitated for us (as with
assistants launched by OpenAI) and are low-code, we still need a precise
understanding of how the software functions to know how to write it.
The success of an AI engineer doesn't hinge on direct experience in training neural networks; an AI engineer can excel by concentrating on the design, optimization, and orchestration of LLM-related workflows. This doesn't mean, however, that the AI engineer needs no knowledge of the inner mechanisms and mathematics. Still, it is true that the role is more accessible to individuals with diverse skill sets.
LLM topology
In our exploration of language models and their applications, we now shift
our focus to the practical tools and platforms through which these models are
physically and technically used. The question arises: What form do these
models take? Do we need to download them onto the machines we use, or do
they exist in the form of APIs?
Before delving into the selection of a specific model, it’s crucial to
consider the type of model required for the use case: a base model (and if so,
what kind—masked, causal, Seq2Seq), RLHF models, or custom fine-tuned
models. Generally, unless there are highly specific task or budgetary
requirements, larger RLHF models like GPT-4-turbo (as well as 4 and 3.5-
turbo) are suitable, as they have demonstrated remarkable versatility across
various tasks due to their robust generalization during training.
In this book, we will use OpenAI’s GPT models (from 3.5-turbo onward)
via Microsoft Azure. However, alternative options exist, and I’ll briefly touch
on them here.
Note
Data submitted to the Azure OpenAI service remains under the
governance of Microsoft Azure, with automatic encryption for all
persisted data. This ensures compliance with organizational security
requirements.
Users can interact with OpenAI and Azure OpenAI’s models through
REST APIs and through the Python SDK for both OpenAI and Azure
OpenAI. Both offer a web-based interface too: Playground for OpenAI and
the Azure OpenAI Studio. ChatGPT and Bing Chat are based on models
hosted by OpenAI and Microsoft Azure OpenAI, respectively.
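As a minimal sketch of the Python SDK route against Azure OpenAI (the endpoint, key, API version, and the deployment name, here assumed to be gpt-35-turbo, depend on your own Azure resource):

import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g., https://<resource>.openai.azure.com/
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-12-01-preview",  # check the API versions currently supported
)

response = client.chat.completions.create(
    model="gpt-35-turbo",  # the *deployment* name chosen in Azure OpenAI Studio
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in one sentence what a large language model is."},
    ],
)
print(response.choices[0].message.content)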
Note
Azure OpenAI offers only GPT-3+ models. However, one can use
another Microsoft product, Azure Machine Learning Studio, to
create models from several sources (like Azure ML and Hugging
Face, with more than 200,000 open-source models) and import
custom and fine-tuned models.
Note
Notable alternatives to Hugging Face include Google Cloud AI,
Mosaic, CognitiveScale, NVIDIA’s pretrained models, Cohere for
enterprise, and task-specific solutions like Amazon Lex and
Comprehend, aligning with Azure’s Cognitive Services.
The current LLM stack
LLMs can be used as a software development tool (think GitHub Copilot,
based on Codex models) or as a tool to integrate in applications. When used
as a tool for applications, LLMs make it possible to develop applications that
would be unthinkable without them.
Currently, an LLM-based application follows a fairly standard workflow.
This workflow, however, is different from that of traditional software
applications. Moreover, the technology stack is still being defined and may
look different within a matter of a few months.
In any case, the workflow is as follows:
1. Test the simple flow and prompts. This is usually done via Azure OpenAI
Studio in the Prompt Flow section, or via Humanloop, Nat.dev, or the
native OpenAI Playground.
2. Conceive a real-world LLM application to work with the user in response to their queries. Vercel, Streamlit, and Steamship are common frameworks for application hosting. However, the application hosting is
merely a web front end, so any web UI framework will do, including
React and ASP.NET.
3. When the user’s query leaves the browser (or WhatsApp, Telegram, or
whatever), a data-filter tool ensures that no unauthorized data makes it to
the LLM engine. A layer that monitors for abuse may also be involved,
even though Azure OpenAI provides a default shield for this.
4. The combined action of the prompt and of orchestrators such as
LangChain and Semantic Kernel (or a custom-made piece of software)
builds the actual business logic. This orchestration block is the core of
an LLM application. This process usually involves augmenting the
available data using data pipelines like Databricks and Airflow; other
tools like LlamaIndex (which can be used as an orchestrator too); and
vector databases like Chroma, Pinecone, Qdrant, and Weaviate—all
working with an embedding model to deal with unstructured or semi-
structured data. (A minimal sketch of this retrieval step appears after this list.)
5. The orchestrator may need to call into external proprietary APIs, OpenAPI-documented feeds, and/or ad hoc data services, including
native queries to databases (SQL or NoSQL). As the data is passed
around, the use of some cache is helpful. Frequently used libraries
include GPTCache and Redis.
6. The output generated by the LLM engine can be further checked to
ensure that unwanted data is not presented to the user interface and/or a
specific output format is obtained. This is usually performed via
Guardrails, LMQL, or Microsoft Guidance.
7. The full pipeline is logged to LangSmith, MLflow, Helicone, Humanloop, or Azure Application Insights. Some of these tools offer a streamlined UI to evaluate production models. For this purpose, the Weights & Biases AI platform is another viable option.
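The following deliberately compact sketch shows the retrieval step at the heart of such an orchestration, using Chroma as an in-memory vector store and an OpenAI chat model to answer over the retrieved context. Library and model names are those published at the time of writing; a production pipeline would add the filtering, caching, output-checking, and logging layers described above.

import chromadb               # pip install chromadb
from openai import OpenAI     # pip install openai

llm = OpenAI()                        # reads OPENAI_API_KEY from the environment
vector_store = chromadb.Client()      # in-memory instance; persistent clients also exist
collection = vector_store.create_collection(name="docs")

# 1. Index a few documents (Chroma embeds them with its default embedding model)
collection.add(
    ids=["1", "2"],
    documents=[
        "Our refund policy allows returns within 30 days of purchase.",
        "Support is available Monday to Friday, 9am to 6pm CET.",
    ],
)

# 2. Retrieve the chunk most similar to the user question
question = "Can I return a product after three weeks?"
retrieved = collection.query(query_texts=[question], n_results=1)
context = retrieved["documents"][0][0]

# 3. Ask the LLM to answer using only the retrieved context
answer = llm.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": f"Answer using only this context: {context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)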
Future perspective
The earliest LLMs were pipelines of simpler neural networks, such as RNNs, convolutional neural networks (CNNs), and LSTMs. Although they offered several advantages over
traditional rule-based systems, they were far inferior to today’s LLMs in
terms of power. The significant advancement came with the introduction of
the transformer model in 2017.
Companies and research centers seem eager to build and release more and
more advanced models, and in the eyes of many, the point of technological
singularity is just around the corner.
As you may know, technological singularity describes a time in some
hypothetical future when technology becomes uncontrollable, leading to
unforeseeable changes in human life. Singularity is often associated with the
development of some artificial superintelligence that surpasses human
intelligence across all domains. Are LLMs the first (decisive) step toward this
kind of abyss? To answer this question about our future, it is necessary to
first gain some understanding of our present.
Current developments
In the pre-ChatGPT landscape, LLMs were primarily considered research
endeavors, characterized by rough edges in terms of ease of use and cost
scaling. The emergence of ChatGPT, however, has revealed a nuanced
understanding of LLMs, acknowledging a diverse range of capabilities in
costs, inference, prediction, and control. Open-source development is a
prominent player, aiming to create LLMs more capable for specific needs,
albeit less cumulatively capable. Open-source models differ significantly
from proprietary models due to different starting points, datasets, evaluations,
and team structures. The decentralized nature of open source, with numerous
small teams reproducing ideas, fosters diversity and experimentation.
However, challenges such as production scalability exist.
Development paths have taken an interesting turn, emphasizing the
significance of base models as the reset point for wide trees of open models.
This approach offers open-source opportunities to advance, despite
challenges in cumulative capabilities compared to proprietary models like
GPT-4-turbo. In fact, different starting points, datasets, evaluation methods,
and team structures contribute to diversity in open-source LLMs. Open-
source models aim to beat GPT-4 on specific targets rather than replicating its
giant scorecard.
Big tech, both vertical and horizontal, plays a crucial role. Vertical big
tech, like OpenAI, tends to keep development within a walled garden, while
horizontal big tech encourages the proliferation of open source. In terms of
specific tech organizations, Meta is a horizontal player. It has aggressively
pursued a “semi” open-source strategy. That is, although Llama 2 is free, the
license is still limited and, as of today, does not meet all the requirements of
the Open Source Initiative.
Other big tech players are pursuing commercially licensed models, with
Apple investing in its Ajax, Google in its Gemini, PaLM, and Flan-T5 models, and
Amazon in Olympus and Lex. Of course, beyond the specific LLMs backing
their applications, they’re all actively working on incorporating AI into
productivity tools, as Microsoft quickly did with Bing (integrated with
OpenAI’s GPTs) and all its products.
Microsoft’s approach stands out, leveraging its investment in OpenAI to
focus more on generative AI applications rather than building base models.
Microsoft’s efforts extend to creating software pieces and architecture around
LLMs—such as Semantic Kernel for orchestration, Guidance for model
guidance, and AutoGen for multi-agent conversations—showcasing a holistic
engineering perspective in optimizing LLMs. Microsoft also stands out in
developing “small” models, sometimes called small language models
(SLMs), like Phi-2.
Indeed, engineering plays a crucial role in the overall development and
optimization process, extending beyond the realm of pure models. While
direct comparisons between full production pieces and base models might not
be entirely accurate due to their distinct functionalities and the engineering
involved in crafting products, it remains essential to strive to maximize the
potential of these models within one’s means in terms of affordability. In this
context, OpenAI’s strategy to lower prices, announced along with GPT-4-
turbo in November 2023, plays a key role.
The academic sector is also influential, contributing new ways of
maximizing LLM performance. Academic contributions to LLMs include
developing new methods to extract more value from limited resources and
pushing the performance ceiling higher. However, the landscape is changing,
and there has been a shift toward collaboration with industry. Academia often
engages in partnerships with big tech companies, contributing to joint
projects and research initiatives. New and revolutionary ideas—perhaps
needed for proper artificial general intelligence (AGI)—often come from
there.
Mentioning specific models is challenging and pointless, as new open-
source models are released on a weekly basis, and even big tech companies
announce significant updates every quarter. The evolving dynamics suggest
that the development paths of LLMs will continue to unfold, with big tech,
open source, and academia playing distinctive roles in shaping the future of
these models.
Speed of adoption
Considering that ChatGPT counted more than 100 million active users within
two months of its launch, the rapid adoption of LLMs is evident. As
highlighted by various surveys during 2023, more than half of data scientists
and engineers plan to deploy LLM applications into production in the coming months. This surge in adoption reflects the transformative potential of LLMs,
exemplified by models like OpenAI’s GPT-4, which show sparks of AGI.
Despite concerns about potential pitfalls, such as biases and hallucinations, a
flash poll conducted in April 2023 revealed that 8.3% of ML teams have
already deployed LLM applications into production since the launch of
ChatGPT in November 2022.
However, adopting an LLM solution in an enterprise is more problematic
than it may seem at first. We all experienced the immediacy of ChatGPT and,
sooner or later, we all started dreaming of having some analogous chatbot
trained on our own data and documents. This is a relatively common
scenario, and not even the most complex one. Nonetheless, adopting an LLM
requires a streamlined and efficient workflow, prompt engineering,
deployment, and fine-tuning, not to mention an organizational and technical
effort to create and store needed embeddings. In other words, adopting an
LLM is a business project that needs adequate planning and resources, not a
quick plug-in to some existing platform.
With LLMs exhibiting a tendency to hallucinate, reliability remains a
significant concern, necessitating human-in-the-loop solutions for
verification. Privacy attacks and biases in LLM outputs raise ethical
considerations, emphasizing the importance of diverse training datasets and
continuous monitoring. Mitigating misinformation requires clean and
accurate data, temperature setting adjustments, and robust foundational
models.
Additionally, the cost of inference and model training poses financial
challenges, although these are expected to decrease over time. Generally, the use of LLMs requires some type of cloud hosting accessed via API, or a local executor, which may be an issue for some corporations. However, hosting or executing models in-house may be costly and less effective.
The adoption of LLMs is comparable to the adoption of web technologies
25 years ago. The more companies moved to the web, the faster technologies
spreading. The flower-stem is more than 2 ft. high, and very much
branched; the branches commencing at from 4 ins. to 8 ins. above the
ground, and forming a large and exceedingly handsome panicle of flowers
of a light-blue colour, tinged with the greyish hue of the numerous
membranous bracts and thin dry calyces. A well-drained, sandy soil, in an
open sunny position, is the best for this plant, which, however, grows in any
ordinary garden-soil, and is admirably adapted for naturalisation or
grouping with the acanthuses, tritomas, etc., the effect of the inflorescence
being very remarkable.
*Stipa pennata (Feather-grass).—This plant, which at other times is
hardly to be distinguished from a strong, stiff tuft of common grass,
presents, in May and June, a very different appearance, the tuft being then
surmounted by numerous flower-stems, nearly 2 ft. high, gracefully
arching, and densely covered, for a considerable part of their upper
extremity, with long, twisted, feathery awns. It loves a deep, sandy loam,
and may be used with fair effect in groups of small plants, or isolated; but
its flowers continue too short a time in bloom to make it very valuable away
from borders.
*Struthiopteris germanica.—One of the most elegant hardy ferns, with
fronds resembling ostrich-plumes in shape, nearly 3 ft. long, and arranged
in a somewhat erect, vase-like rosette. It is particularly suited for the
embellishment of the slopes of pleasure-grounds, cascades, grottoes, and
rough rockwork, the margins of streams and pieces of water, and will thrive
in moist and deep sandy soil, either in the full sunshine or in the shade. S.
pennsylvanica very closely resembles S. germanica, the chief point of
difference being the narrowness of the fertile fronds of the former species.
Both kinds will prove very effective in adding beauty of form to a garden,
and should by no means be confined to the fernery proper. Central Europe.
*Tamarix.—These very elegant hardy shrubs may be used with
excellent effect in the flower-garden and pleasure-ground, though they are
at present seldom employed in these places. T. gallica or anglica is found
apparently wild in several parts of the south of England, and other kinds,
such as germanica, parviflora, tetrandra, spectabilis, and indica, are also in
cultivation. In the neighbourhood of Paris T. indica thrives very freely, and
forms beautiful hedges, but is cut down by frost during some winters. It
would probably do better in the south of England. The plants have minute
leaves and very elegantly-panicled branches, which give them a feathery
effect, somewhat like that of the most graceful conifers, and, if possible,
more elegant: the roseate panicles of small flowers are also very pretty. A
finer effect would be obtained from these shrubs by isolating them on the
grass than in any other way.
*Tanacetum vulgare var. crispum.—A very elegant variety of the
common tansy, much dwarfer in stature, and with smaller emerald-green
leaves, which are very elegantly cut, and have a crisped or frizzled
appearance. It is quite hardy, and forms an effective ornament on the
margins of shrubberies, near rockwork, etc. It does best fully exposed, and
probably the only way in which it can be benefited after planting—in deep
and rather moist soil it does best, but will grow “anywhere”—is by thinning
out the shoots in spring, so that each remaining one shall have free room to
suspend its exquisite leaves; thinned thus, it looks much better than when
the stems are crowded, and of course, if it is done in time, they individually
attain more strength and dignity. The flowers should be pinched off before
they open. Britain.
Thalia dealbata.—This is one of the finest aquatic plants which we can
employ in the embellishment of pieces of water, streams, etc. In a warm and
sheltered position, and on a substantial and rich bottom, it grows
vigorously, sometimes attaining a height of 6 ft. The best mode of growing
it is in pots or tubs pierced with holes, in a mixture of stiff peat and clayey
soil, with a portion of river-mud and sand. In winter these pots or tubs may
be submerged to a greater depth, and the plants be thus effectually
protected. It would not attain the above size out of doors except in warm
places in the southern counties, in which it might be planted out directly
without taking the precautions above described. It is generally grown in the
stove in this country. N. America.
*Thalictrum minus.—One of the most elegant-leaved of our native
plants, forming compact, roundish bushes, from a foot to 18 ins. high, very
symmetrical, and of a slightly glaucous hue. It may be grown in any soil,
and requires only one little attention, namely, to pinch off the slender
flower-stems that appear in May and June. Not alone in its aspect, as a little
bushy tuft, does it resemble the “Maidenhair Fern,” as Adiantum cuneatum
is often called, but the leaves are almost pretty enough to pass, when
mingled with flowers, for those of the fern; they are also stiffer and more
lasting than fern-leaves, and are well suited for mingling with vases of
flowers, etc. There are probably several “forms” or varieties of this plant. It
would look very pretty isolated in large tufts as an edging, or in borders, or
in groups of dwarf subjects. Easily increased by division.
*The Tritomas.—So hardy, so magnificent in colouring, and so fine in
form are these plants, that we can no more dispense with their use in the
garden where beauty of form as well as colour is to prevail, than we can
with the noble Pampas grass. They are more conspicuously beautiful, when
other things begin to succumb before the gusts and heavy rains of autumn,
than any plants which flower in the bright days of midsummer. It is not
alone as component parts of large ribbon-borders and in such positions that
these grand plants are useful, but in almost any part of the garden.
Springing up as a bold, close group on the green turf, and away from
brilliant surroundings, they are more effective than when associated with
bedding plants; and of course many such spots may be found for them near
the margins of the shrubberies in most pleasure-grounds. It is in an isolated
group, flaming up amid the verdure of trees and shrubs and grass, that their
dignified aspect and brilliant colour are seen to best advantage. However,
tastefully disposed in the flower-garden, they will prove generally useful,
and particularly for association with the finer autumn-flowering herbaceous
plants. A most satisfactory result may be produced by associating the
Tritomas with the Pampas grass and the two Arundos, the large Statice
latifolia, and the strong and beautiful autumn-flowering Anemone japonica
alba, which is peculiarly suited for association with hardy herbaceous
plants of fine habit, and should be in every garden where a hardy flower is
valued.
The Tritomas are not fastidious as to soil, and with a little preparation of
the ground may be grown almost anywhere. They thrive with extraordinary
vigour and freedom where the soil is very sandy as well as rich and deep,
and are readily multiplied by division.
As every garden should be embellished by well-developed specimens or
groups of these fine plants, those who have very poor and thin, or pure clay
soils, would do well to excavate the ground to the depth of 2 ft. or 3 ft., and
fill in with good rich loam. When the soil is deep, no watering will be
required.
*Tritoma Burchelli.—This kind is distinguished by the lighter green of
its leaves, by its black-spotted flower-stem, and especially by the colour of
its flowers, which are crimson at the base, passing into carmine in the
middle, and pale-yellow or greenish at the tips. There is a variety which has
the leaves variegated or striped with white, but it is somewhat tender and
rare.
*Tritoma glauca.—A dwarfer kind than T. Uvaria, with leaves of a sea-
green colour, and very large spikes of scarlet-and-yellow flowers, which,
when in bud, are hidden by long, sea-green bracts, streaked and rayed with
white. There is a scarce variety with recurved leaves (T. g. recurvata),
which has somewhat of the habit of a Bromelia. S. Africa.
*Tritoma præcox.—A recently-introduced, handsome, hardy perennial,
with very much the habit of T. Uvaria. The flower-stem grows from 20 ins.
to 2 ft. high, and the flowers, which are produced about the middle of May,
are of a bright-red colour when exposed to the full sun, and of a bright-
yellow when grown in the shade. The leaves are fully 2 ft. long, sharply
keeled, and with toothed edges. S. Africa.
*Tritoma Uvaria.—A very ornamental and well-known kind from S.
Africa, forming thick tufts of linear, erect leaves. It is a vigorous grower,
and small specimens have been known in three years to form tufts from 3 ft.
to 4 ft. through, bearing from 50 to 100 flower-spikes. The flowering-stems
are about 3¼ ft. in height, and the flowers are borne in dense conical
clusters at the top. The upper part of the cluster, containing the young
flowers, is of a coral-red colour, the lower part yellow, all the flowers
gradually changing to this colour. Other varieties in cultivation are—T. U.
grandis or grandiflora, which is much taller than the preceding kind, with
stouter stems and larger flower-spikes; T. U. Rooperi, which only differs
from the type in being somewhat dwarfer in habit and having softish or
flaccid leaves, frequently falling forward; it also flowers later; and T. U.
Lindleyana, which has erect, very rigid leaves, and more deeply-coloured
flowers than the type.
Tupidanthus calyptratus.—A noble subtropical plant from Bengal,
standing in the open air from the beginning of June till October without the
slightest injury. The leaves are large, deeply-divided, and of a dark shining
green colour. It requires stove treatment in winter and spring, and is suitable
for beds or planting singly.
*Typha latifolia (Reed-Mace).—A native aquatic plant, growing in tufts
of 2-rowed flat leaves from 1½ ft. to 2 ft. long, and 1 in. or 1½ in. wide.
From the centre of each tuft springs a stem 6 ft. or 7 ft. high, which in the
flowering season is terminated by a close cylindrical spike 9 ins. long, and
of a dark-olive colour, changing to a brownish-black as it ripens. This is one
of the most striking and ornamental of our British water-plants, and may be
used with excellent effect grouped with such subjects as the Great Water-
Dock.
*Typha angustifolia resembles the preceding species in all respects
except in the size of its leaves and spike. The leaves are about ½ in. wide
and the spike about ½ in. in diameter, and something shorter than that of T.
latifolia. Of the two it is perhaps the more graceful in aspect.
Uhdea bipinnatifida.—This is one of the most useful plants in its class,
producing a rich mass of handsome leaves, with somewhat the aspect of
those of the great cow-parsnips, but of a more refined type. The foliage has
a slightly silvery tone, and the plant continues to grow fresh and vigorously
till late in autumn. It is well suited for forming rich masses of foliage, not so
tall, however, as those formed by such things as Ricinus or Ferdinanda. It is
freely propagated by cuttings taken from old plants kept in a cool stove,
greenhouse, or pit during the winter months, and placed in heat to afford
cuttings freely in early spring. Under ordinary cutting treatment on hotbeds
or in a moist warm propagating house, it grows as freely as could be
desired, and may be planted out at the end of May or the beginning of June.
Mexico.
[Illustration: Uhdea bipinnatifida.]
Uhdea pyramidata.—This kind has been less cultivated in England
than the preceding, from which it is distinct in appearance. It is of a lighter
and fresher green, and inclined to grow larger in habit, having more of the
aspect of a Malva in foliage. Useful for the same purposes as the preceding
kind, but not so valuable.
*Veratrum album (White Hellebore).—A handsome, erect perennial of
pyramidal habit, 3½ ft. to 5 ft. high, with curiously plaited leaves 1 ft. long
and 6 ins. to 8 ins. broad, regularly alternating on the stem and overlapping
each other at the base. The flowers, of a yellowish-white colour, are borne
in numerous dense spikes on the top of the stem, forming a large panicle.
The leaves being handsome, it is worth a place in full collections of fine-
foliaged hardy herbaceous plants, and would look to best advantage in
small groups in the rougher parts of the pleasure-ground and by wood-
walks. Thrives best in peaty soil, and is best multiplied by division, as the
seed is very slow and capricious in germinating, sometimes not starting
until the second year, and it is some years before the seedlings are strong
enough to flower. The root of this plant is exceedingly poisonous. V. nigrum
differs from V. album, in having more slender stems, narrower leaves, and
blackish-purple flowers. V. viridiflorum resembles V. album in every
respect, except that its flowers are of a lively green colour. France.
*Verbascum Chaixii.—Most of us know how very distinct and
imposing are the larger Verbascums, and those who have attempted their
culture must soon have found out what far-seeding things they are. Of a
biennial character, their culture is most unsatisfactory: they either migrate
into the adjoining shrubbery or disappear altogether. The possession of a
fine perennial species must therefore be a desideratum, and such a plant will
be found in Verbascum Chaixii. This is fine in leaf and stature, and
produces abundance of flowers. The lower leaves grow 18 ins. or 20 ins.
long, and the plant when in flower reaches a height of 7 ft. or 8 ft., or even
more when in good soil. It is a truly distinct subject, and may, it is to be
hoped, ere long be found common in our gardens and nurseries. Like the
preceding, but grown under the name V. vernale, is a kind I saw in the
Jardin des Plantes at Paris, and introduced into cultivation in England; but it
is as yet scarce.
Verbesina gigantea.—An ornamental shrub from Jamaica, about 6½ ft.
high, forming, when young, a very pleasing subject for decorative purposes,
its round green stems being covered with large, winged, pinnate leaves of a