PostgreSQL Developer's Guide
Ibrar Ahmed
Asif Fayyaz
Amjad Shahzad
BIRMINGHAM - MUMBAI
PostgreSQL Developer's Guide
All rights reserved. No part of this book may be reproduced, stored in a retrieval
system, or transmitted in any form or by any means, without the prior written
permission of the publisher, except in the case of brief quotations embedded in
critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy
of the information presented. However, the information contained in this book is
sold without warranty, either express or implied. Neither the authors, nor Packt
Publishing, nor its dealers and distributors will be held liable for any damages
caused or alleged to have been caused, directly or indirectly, by this book.
Packt Publishing has endeavored to provide trademark information about all of the
companies and products mentioned in this book by the appropriate use of capitals.
However, Packt Publishing cannot guarantee the accuracy of this information.
ISBN 978-1-78398-902-7
www.packtpub.com
Credits
Technical Editor
Gaurav Suri
About the Authors
I would like to thank my loving parents for everything they did for
me. Personal time always belongs to family, and I did this in my
personal time, so thanks to my family for all the support. I would also
like to thank Mr. Richard Harvey, who encouraged me to write the
book, and my early age mentor, Mr. Mahmood Hussain, who helped
me a lot at the start of my professional career. It has been a really great
experience to work with Amjad Shahzad and Asif Fayyaz.
Apart from his professional activities, he, along with his dedicated friends,
is keen to find ways that can make life easier for those who are facing the worst
living conditions.
His other passions include, but are not limited to, traveling to different places, trying
different cuisines, and reading books if somehow permitted by his loving family.
Amjad Shahzad has been working in the open source software industry for the
last 10 years. He is currently working as a senior quality assurance engineer at
a leading PostgreSQL-based company, which is the only worldwide provider of
enterprise-class products and services based on PostgreSQL. Amjad's core expertise
lies in the areas of pg_upgrade, slony and streaming replication, Cloud database,
and database partitioning. His future endeavors include exploring PostgreSQL
replication solutions.
Apart from his professional activities, he is also involved in doing social activities that
involve helping people stand on their feet. In his free time, he likes to explore nature by
doing outdoor activities, including hiking, trekking, and nature photography.
Daniel Durante is an avid coffee drinker, motorcyclist, and rugby player. He has
been programming since he was 12 years old. He has mostly been involved with web
development, from PHP to Golang, while using PostgreSQL as his main choice of
data storage.
He has worked on text-based browser games that have reached over 1,000,000
players, created bin-packing software for CNC machines, and helped contribute
to one of the oldest ORMs of Node.js.
Vinit Kumar is an autodidact engineer who cares about writing beautiful code that
scales well.
Vinit is an active member of the free and open source software community and has
contributed to many projects, including Node.js, Python, and Django.
These days, he writes a lot of Django code along with frontend work on backbone
layers. He also works closely with the mobile team (iOS and Android) to ensure that
they get proper APIs and documentation support to get their job done.
He also helps his team write good, maintainable code by doing code reviews
and following good practices around version control (Git), documentation, and tooling.
Jean Lazarou started spending time with computers at the age of 15.
He has worked in various sectors, such as the medical industry, the manufacturing
industry, university education, and the multimedia world.
He mainly uses Basic, C/C++, Java, and Ruby to develop fat clients, web
applications, frameworks, tools, and compilers, often involving databases.
He has published his personal works on GitHub and some technical articles
on his blog.
Ľuboš Medovarský is an entrepreneur and open source C/C++, Pascal, Python,
and Java software developer with experience in GNU/Linux and OpenBSD
administration, configuration management, monitoring, networking, firewalls,
and embedded systems.
Discontented with today's fragmented and broken state of home automation and
the Internet of Things, he has developed hardware and software for the Whistler
automation smart house project, which aims to disrupt the market with platform
unification, privacy by design, device autonomy, built-in artificial intelligence,
and security – all in open source packages and affordable for the masses. Accelera
Networks s.r.o., the company he founded in 2006, develops custom software and
hardware applications as well as provides IT management services. Previously, he
was employed with Alcatel, Hewlett-Packard, AT&T, Erste Group, and a handful of
smaller companies. When he's not at work, the trekkie inside him dreams of space
colonization and the technological advancement of humanity. His favorite outdoor
activities include biking and flying in a glider.
I would like to thank my wife, Izabela, and daughter, Zoja, for their
patience and understanding and for the joy of life in their proximity.
www.PacktPub.com
Did you know that Packt offers eBook versions of every book published, with PDF and ePub
files available? You can upgrade to the eBook version at www.PacktPub.com and as a print
book customer, you are entitled to a discount on the eBook copy. Get in touch with us at
[email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a
range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www2.packtpub.com/books/subscription/packtlib
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book
library. Here, you can search, access, and read Packt's entire library of books.
Why subscribe?
• Fully searchable across every book published by Packt
• Copy and paste, print, and bookmark content
• On demand and accessible via a web browser
Preface
PostgreSQL is the world's most advanced community-driven open source database.
The first open source version of PostgreSQL was released on 1st August 1996, a
combined effort between Bruce Momjian and Vadim B. Mikheev. Since then, major
releases have come annually, and all releases are available under the free and open
source PostgreSQL License, which is similar to the BSD and MIT licenses. Modern
technologies are emerging with new features on a regular basis, and PostgreSQL is
a fantastic example of this, adding more robust features to cope
with the changing trends of technology. Developers and database administrators love
to use PostgreSQL because of its reliability, scalability, and continuous support from
the open source community.
The main objective of this book is to teach you how to program database applications
and custom programmatic functions. It is a practical tutorial book with an emphasis
on providing real-world examples of how applications can be programmed with
PostgreSQL, and on building a firm grip on core development concepts and functions.
By the end of this book, we will have shown you how to write custom functions that
extend the PostgreSQL database beyond its core capabilities. We wish you the best
of luck on your quest for knowledge of PostgreSQL, and we hope that at the end of
this book, you will feel you deserve a pat on the back for your efforts
in acquiring some hands-on expertise with PostgreSQL.
Chapter 3, Working with Indexes, is all about indexes, so expect to see a discussion
of the fundamental concepts of indexes, such as the kinds of indexes PostgreSQL
supports and the syntax to create them. The main story of this chapter is which
kind of index to use where, and which conditions each kind is best suited for. You can
then build different kinds of indexes in the warehouse database to illustrate the practical
use of indexes.
Chapter 4, Triggers, Rules, and Views, consists of three sections: triggers, rules, and
views. The first section of this chapter will explain what a trigger is and how to
create triggers in PostgreSQL. The second part will deal with PostgreSQL rules.
There will be a focus on how rules work, explaining their invocation, input, and
results. The third and final part will revolve around views and why they are important in
database design.
Chapter 5, Window Functions, discusses the power and concepts of window functions
in conjunction with aggregate functions. We will also cover the scope, structure,
and usage of window functions with examples. Another objective will be to acquire
a crystal clear understanding of the core of window functions and the data that is
processed with the help of frame, OVER, PARTITION BY, and ORDER BY clauses.
This chapter will also discuss the available built-in window functions, along with
custom ones.
Chapter 8, Dealing with Large Objects, is about the handling of Large Objects (LO)
as there is a need to store large objects such as audio and video files. PostgreSQL
has support to manipulate these objects. The handling of large objects is
completely different from that of other types such as text, varchar, and int. This
chapter will explain why we need to store Large Objects (LO) and how PostgreSQL
implements LO storage.
Chapter 10, Embedded SQL in C – ECPG, covers the syntax and usage of
Embedded SQL to manipulate data from within C code. Other than libpq, there
is an alternative way to communicate with a PostgreSQL server from C code, called ECPG.
Additionally, there will be coverage of how to compile the ECPG program, and
we will discuss the command-line options of the ECPG binary.
Chapter 11, Foreign Data Wrapper, explains the building blocks of the
foreign data wrapper and discusses how to utilize postgres_fdw and file_fdw to
manipulate foreign data. PostgreSQL provides a feature called the
foreign data wrapper: a template for writing modules that access foreign data.
It is based on the SQL/MED (SQL Management of External Data) standard.
There are two community-maintained wrappers, postgres_fdw and
file_fdw, along with many externally maintained foreign data wrappers.
Chapter 12, Extensions, covers how to install and use available extensions in
PostgreSQL. PostgreSQL has a feature for installing loadable modules called
extensions. Instead of creating a bunch of objects by running SQL queries, an
extension, which is a collection of objects, can be created and dropped using
a single command. The main advantage of an extension is maintainability.
There are several extensions available.
Conventions
In this book, you will find a number of styles of text that distinguish between
different kinds of information. Here are some examples of these styles, and an
explanation of their meaning.
Code words in text, database table names, folder names, filenames, file extensions,
pathnames, dummy URLs, user input, and Twitter handles are shown as follows:
"With the ALTER TABLE command, we can add, remove, or rename table columns."
New terms and important words are shown in bold. Words that you see on the
screen, in menus or dialog boxes for example, appear in the text like this: "The
team added core object-oriented features in Ingres and named the new version
PostgreSQL."
Reader feedback
Feedback from our readers is always welcome. Let us know what you think about
this book—what you liked or may have disliked. Reader feedback is important for us
to develop titles that you really get the most out of.
If there is a topic that you have expertise in and you are interested in either writing
or contributing to a book, see our author guide on www.packtpub.com/authors.
Customer support
Now that you are the proud owner of a Packt book, we have a number of things
to help you to get the most from your purchase.
Errata
Although we have taken every care to ensure the accuracy of our content, mistakes
do happen. If you find a mistake in one of our books—maybe a mistake in the text or
the code—we would be grateful if you would report this to us. By doing so, you can
save other readers from frustration and help us improve subsequent versions of this
book. If you find any errata, please report them by visiting
http://www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link,
and entering the details of your errata. Once your errata are verified, your submission
will be accepted and the errata will be uploaded on our website, or added to any list of
existing errata, under the Errata section of that title. Any existing errata can be viewed
by selecting your title from http://www.packtpub.com/support.
Piracy
Piracy of copyright material on the Internet is an ongoing problem across all media.
At Packt, we take the protection of our copyright and licenses very seriously. If you
come across any illegal copies of our works, in any form, on the Internet, please
provide us with the location address or website name immediately so that we can
pursue a remedy.
We appreciate your help in protecting our authors, and our ability to bring you
valuable content.
Questions
You can contact us at [email protected] if you are having a problem with
any aspect of the book, and we will do our best to address it.
Getting Started with PostgreSQL
Before starting our journey with SQL, allow me to quickly go through the history of
PostgreSQL. It all started at the University of California, Berkeley, in the late 1970s,
with the aim of developing a relational database possessing object-oriented features.
They named it Ingres. Later on, around the mid-1980s, a team of core developers led
by Michael Stonebraker from the University of California started work on Ingres.
The team added core object-oriented features in Ingres and named the new version
PostgreSQL.
This team was attached to the development of PostgreSQL for around 8 years.
During this time, they introduced object-oriented concepts, procedures, rules,
indexes, and types. In 1994, Andrew Yu and Jolly Chen replaced the Ingres-
based query language with the SQL query language. After this change, in 1995,
PostgreSQL was renamed Postgres95. In 1996, after entering the open source
world, Postgres95 went through multiple changes and new features such as Multi
Version Concurrency Control (MVCC), and built-in types were added. Over a
period of time, following the addition of new features and with the devoted work
of developers, Postgres95 achieved consistency and uniformity in code. They finally
renamed Postgres95 to PostgreSQL.
From the early releases of PostgreSQL (from version 6.0 that is), many changes have
been made, with each new major version adding new and more advanced features.
The current version is PostgreSQL 9.4 and is available from several sources and in
various binary formats.
1. We are assuming here that you have successfully installed PostgreSQL and
faced no issues. Now, you will need to connect to the default database that
is created by the PostgreSQL installer. To do this, navigate to the default path
of installation, which is /opt/PostgreSQL/9.4/bin, from your command
line, and execute the following command, which will prompt for the postgres
user password that you provided during the installation:
/opt/PostgreSQL/9.4/bin$./psql -U postgres
Password for user postgres:
2. After executing the preceding command, you will be logged in to the default
database as the user postgres, and you will be able to see the following on your
command line:
psql (9.4beta1)
Type "help" for help
postgres=#
3. You can then create a new database called warehouse_db using the following
statement in the terminal:
postgres=# CREATE DATABASE warehouse_db;
4. You can then connect with the warehouse_db database using the
following command:
postgres=# \c warehouse_db
5. You are now connected to the warehouse_db database as the user postgres,
and you will have the following warehouse_db shell:
warehouse_db=#
Let's summarize what we have achieved so far. We are now able to connect to the
default database postgres and have created the warehouse_db database successfully. It's
now time to actually write queries using psql and perform some Data Definition
Language (DDL) and Data Manipulation Language (DML) operations, which we
will cover in the following sections.
In PostgreSQL, we can have multiple databases. Inside the databases, we can have
multiple extensions and schemas. Inside each schema, we can have database objects
such as tables, views, sequences, procedures, and functions.
We are first going to create a schema named record and then we will create some
tables in this schema. To create a schema named record in the warehouse_db
database, use the following statement:
warehouse_db=# CREATE SCHEMA record;
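Objects inside a schema are referenced with schema-qualified names. As a quick, hypothetical illustration (the table and column names here are only examples), a table created inside the record schema would look like the following:
warehouse_db=# CREATE TABLE record.audit_log
(
  log_id INTEGER,
  details TEXT
);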
Creating tables
Now, let's perform some DDL operations starting with creating tables. To create
a table named warehouse_tbl, execute the following statements:
warehouse_db=# CREATE TABLE warehouse_tbl
(
warehouse_id INTEGER NOT NULL,
warehouse_name TEXT NOT NULL,
year_created INTEGER,
street_address TEXT,
city CHARACTER VARYING(100),
state CHARACTER VARYING(2),
zip CHARACTER VARYING(10),
CONSTRAINT "PRIM_KEY" PRIMARY KEY (warehouse_id)
);
The preceding statements created the table warehouse_tbl that has the primary key
warehouse_id. Now, as you are familiar with the table creation syntax, let's create a
sequence and use that in a table. You can create the hist_id_seq sequence using the
following statement:
warehouse_db=# CREATE SEQUENCE hist_id_seq;
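The history table itself can be sketched along the following lines; the history_id column takes its default value from the hist_id_seq sequence, and the warehouse_id column references the warehouse_tbl table (the remaining columns are illustrative, not prescribed):
warehouse_db=# CREATE TABLE history
(
  history_id INTEGER NOT NULL DEFAULT nextval('hist_id_seq'),
  amount INTEGER,
  registration_date TIMESTAMP WITHOUT TIME ZONE,
  warehouse_id INTEGER,
  CONSTRAINT "PRIM_KEY_HIST" PRIMARY KEY (history_id),
  CONSTRAINT "FORN_KEY" FOREIGN KEY (warehouse_id)
    REFERENCES warehouse_tbl (warehouse_id)
);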
The preceding query will create a history table in the warehouse_db database,
and the history_id column uses the sequence as the default input value.
In this section, we successfully learned how to create a table and also learned how
to use a sequence inside the table creation syntax.
Altering tables
Now that we have learned how to create multiple tables, we can practice some
ALTER TABLE commands by following this section. With the ALTER TABLE command,
we can add, remove, or rename table columns.
Firstly, with the help of the following example, we will be able to add the phone_no
column in the previously created table warehouse_tbl:
warehouse_db=# ALTER TABLE warehouse_tbl
ADD COLUMN phone_no INTEGER;
We can then verify that a column is added in the table by describing the table
as follows:
warehouse_db=# \d warehouse_tbl
Table "public.warehouse_tbl"
Column | Type | Modifiers
----------------+------------------------+-----------
warehouse_id | integer | not null
warehouse_name | text | not null
year_created | integer |
street_address | text |
city | character varying(100) |
state | character varying(2) |
zip | character varying(10) |
phone_no | integer |
Indexes:
"PRIM_KEY" PRIMARY KEY, btree (warehouse_id)
Referenced by:
    TABLE "history" CONSTRAINT "FORN_KEY" FOREIGN KEY (warehouse_id) REFERENCES warehouse_tbl(warehouse_id)
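To remove the column we just added, the ALTER TABLE command can be used again, this time with the DROP COLUMN clause, for example:
warehouse_db=# ALTER TABLE warehouse_tbl
DROP COLUMN phone_no;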
We can then finally verify that the column has been removed from the table by
describing the table again as follows:
warehouse_db=# \d warehouse_tbl
Table "public.warehouse_tbl"
Column | Type | Modifiers
----------------+------------------------+-----------
warehouse_id | integer | not null
warehouse_name | text | not null
year_created | integer |
street_address | text |
city | character varying(100) |
state | character varying(2) |
zip | character varying(10) |
Indexes:
"PRIM_KEY" PRIMARY KEY, btree (warehouse_id)
Referenced by:
TABLE "history" CONSTRAINT "FORN_KEY" FOREIGN KEY
(warehouse_id) REFERENCES warehouse_tbl(warehouse_id) TABLE
"history" CONSTRAINT "FORN_KEY" FOREIGN KEY (warehouse_id)
REFERENCES warehouse_tbl(warehouse_id)
Truncating tables
The TRUNCATE command is used to remove all rows from a table without providing
any criteria. In the case of the DELETE command, the user has to provide the delete
criteria using the WHERE clause. To truncate data from the table, we can use the
following statement:
warehouse_db=# TRUNCATE TABLE warehouse_tbl;
We can then verify that the warehouse_tbl table has been truncated by performing
a SELECT COUNT(*) query on it using the following statement:
warehouse_db=# SELECT COUNT(*) FROM warehouse_tbl;
count
-------
0
(1 row)
Inserting data
So far, we have learned how to create and alter a table. Now it's time to play around
with some data. Let's start by inserting records in the warehouse_tbl table using the
following command snippet:
warehouse_db=# INSERT INTO warehouse_tbl
(
warehouse_id,
warehouse_name,
year_created,
street_address,
city,
state,
zip
)
VALUES
(
1,
'Mark Corp',
2009,
'207-F Main Service Road East',
'New London',
'CT',
4321
);
We can then verify that the record has been inserted by performing a SELECT query
on the warehouse_tbl table as follows:
warehouse_db=# SELECT warehouse_id, warehouse_name, street_address
FROM warehouse_tbl;
warehouse_id | warehouse_name | street_address
---------------+----------------+-------------------------------
1 | Mark Corp | 207-F Main Service Road East
(1 row)
Updating data
Once we have inserted data in our table, we should know how to update it. This can
be done using the following statement:
warehouse_db=# UPDATE warehouse_tbl
SET year_created=2010
WHERE year_created=2009;
To verify that a record is updated, let's perform a SELECT query on the warehouse_
tbl table as follows:
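warehouse_db=# SELECT warehouse_id, warehouse_name, year_created
FROM warehouse_tbl;
The year_created value for the Mark Corp row should now show 2010.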
Deleting data
To delete data from a table, we can use the DELETE command. Let's add a few records
to the table and then later on delete data on the basis of certain conditions:
warehouse_db=# INSERT INTO warehouse_tbl
(
warehouse_id,
warehouse_name,
year_created,
street_address,
city,
state,
zip
)
VALUES
(
2,
'Bill & Co',
2014,
'Lilly Road',
'New London',
'CT',
4321
);
warehouse_db=# INSERT INTO warehouse_tbl
(
warehouse_id,
warehouse_name,
year_created,
street_address,
city,
state,
zip
)
VALUES
(
3,
'West point',
2013,
'Down Town',
'New London',
'CT',
4321
);
We can then delete data from the warehouse_tbl table, where warehouse_name is
Bill & Co, by executing a statement of the following form:
To verify that a record has been deleted, we will execute the following SELECT query:
warehouse_db=# SELECT warehouse_id, warehouse_name
FROM warehouse_tbl
WHERE warehouse_name='Bill & Co';
warehouse_id | warehouse_name
--------------+----------------
(0 rows)
• timestamp [(p)] [without time zone]: This is used to store the date and
time without a time zone.
• timestamp [(p)] with time zone: This is used to store date and time with
a time zone.
• time, timestamp, and interval: These data types accept an optional precision
value p that specifies the number of fractional digits retained in the seconds
field. By default, there is no explicit bound on precision. The allowed range of
p is from 0 to 6 for the timestamp and interval types.
• tsquery: This is used to store a text search query.
• tsvector: This is used to store a text search document.
• txid_snapshot: This is used to store the user-level transaction ID snapshots.
• uuid: This is used to store universally unique identifiers.
• xml: This data type serves as storage for XML data.
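As a brief illustration of a few of these types, the following hypothetical table mixes uuid, timestamp with time zone, xml, and tsvector columns (the table and column names are only examples):
warehouse_db=# CREATE TABLE type_demo
(
  event_id uuid,
  occurred_at TIMESTAMP WITH TIME ZONE,
  payload xml,
  search_terms tsvector
);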
Logical operators
Logical operators are available in PostgreSQL, and these are:
• AND
• OR
• NOT
In PostgreSQL, the values true, false, and null form a three-valued logic
system. For more detail, see the following truth table, which shows how values
a and b can result in different values when combined with the AND and OR
logical operators:
a b a AND b a OR b
TRUE TRUE TRUE TRUE
TRUE FALSE FALSE TRUE
TRUE NULL NULL TRUE
FALSE FALSE FALSE FALSE
FALSE NULL FALSE NULL
NULL NULL NULL NULL
You can then see from the following truth table how a value a behaves
when used with the NOT logical operator:
a NOT a
TRUE FALSE
FALSE TRUE
NULL NULL
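These rules can be checked directly from psql; for instance, the following query combines boolean values with null (the cast is only needed to give the bare null a boolean type). It returns null for TRUE AND NULL, false for FALSE AND NULL, and null for NOT NULL:
warehouse_db=# SELECT (TRUE AND NULL) AS true_and_null,
       (FALSE AND NULL) AS false_and_null,
       (NOT NULL::boolean) AS not_null;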
Comparison operators
PostgreSQL provides the comparison operators shown in the following table:
Operator Description
< Less than
> Greater than
<= Less than or equal to
>= Greater than or equal to
= Equal
<> or != Not equal
Mathematical operators
PostgreSQL also provides the mathematical operators shown in the following table:
Operator Description
+ Addition
- Subtraction
* Multiplication
/ Division
% Modulo (Remainder)
^ Exponentiation
|/ Square root
||/ Cube root
! Factorial
!! Factorial (prefix operator)
@ Absolute value
& Bitwise AND
| Bitwise OR
# Bitwise XOR
~ Bitwise NOT
<< Bitwise shift left
>> Bitwise shift right
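A quick, hypothetical way to try some of these operators is to evaluate them directly in a SELECT statement; the following returns 2, 1024, 5, and 7, respectively:
warehouse_db=# SELECT 17 % 5 AS modulo,
       2 ^ 10 AS exponent,
       |/ 25.0 AS square_root,
       @ (-7) AS absolute_value;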
Apart from the logical, comparison, and mathematical operators, PostgreSQL also
has operators for strings, binary strings, bit strings, date/time, geometric, network
address, and text search. Details of these operators are beyond the scope of this book
and can be studied in more detail in the PostgreSQL documentation available at
http://www.postgresql.org/docs/9.4/static/functions-string.html.
Constraints in PostgreSQL
PostgreSQL offers support for constraints at multiple levels. Constraints are used
to enforce rules on the data inserted into tables. Only data that complies with the
constraint rules is allowed to be added to
the table. The constraints present in PostgreSQL are:
• Unique constraints
• Not-null constraints
• Exclusion constraints
• Primary key constraints
• Foreign key constraints
• Check constraints
We will explain all of these constraints one by one with supportive examples.
Let's start with the unique constraints.
Unique constraints
A unique constraint ensures, at the time of an insertion operation, that the data
present in a column (or a group of columns) is unique with regard
to all rows already present in the table. Let's create a few tables using unique
constraints in the following manner:
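Here, the unique constraint is attached directly to the tool_id column:
warehouse_db=# CREATE TABLE tools
(
  tool_id INTEGER UNIQUE,
  tool_name TEXT,
  tool_class NUMERIC
);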
Alternatively, the same constraint can be declared at the table level, after all the columns.
For instance, this can look like the following:
warehouse_db=# CREATE TABLE tools
(
tool_id INTEGER,
tool_name TEXT,
tool_class NUMERIC,
UNIQUE (tool_id)
);
When defining the unique constraints for a group of columns, all columns must be
listed separately using commas. Consider the following example:
warehouse_db=# CREATE TABLE cards
(
card_id INTEGER,
owner_number INTEGER,
owner_name TEXT,
UNIQUE (card_id, owner_number)
);
The preceding query will create the cards table with a unique constraint
implemented on the card_id and owner_number columns. Note that the unique
constraint does not apply to null values. This means that in the cards table, two
rows can contain identical data if both have card_id and owner_number set to null.
Not-null constraints
A not-null constraint makes sure that a column always has a value and is never
left as null. Drop the previously created tools table and create it
again with this constraint, as in the following example:
warehouse_db=# CREATE TABLE tools
(
tool_id INTEGER NOT NULL,
tool_name TEXT,
tool_class NUMERIC
);
The preceding query will create a table with a not-null constraint on the tool_id
column. We can apply the not-null constraint to as many columns as we need.
Consider the following example:
warehouse_db=# CREATE TABLE tools
(
tool_id INTEGER NOT NULL,
tool_name TEXT NOT NULL,
tool_class NUMERIC
);
The preceding query will create the tools table with not-null constraints on tool_id
and tool_name.
Exclusion constraints
An exclusion constraint is used when comparing two rows on nominative columns
or expressions using the nominative operators. The result of the comparison will be
false or null. Consider the following example in which the conflicting tuple is given
the AND operation together:
warehouse_db=# CREATE TABLE movies
(
Title TEXT,
Copies INTEGER
);
We will create an exclusion constraint using the ALTER TABLE command. The
conditions for a conflicting tuple are ANDed together, so in order for two records to
conflict, the following must hold:
record1.title = record2.title AND record1.copies = record2.copies.
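One way to express this as an exclusion constraint on the movies table is sketched below; the constraint name is an illustration, and the default btree index method is assumed, since it supports the equality operator:
warehouse_db=# ALTER TABLE movies
ADD CONSTRAINT exclude_duplicate_movies
EXCLUDE (title WITH =, copies WITH =);
With this constraint in place, inserting two rows with the same title and the same number of copies is rejected.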
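Primary key constraints
A primary key constraint is, in effect, the combination of a unique constraint and a not-null constraint, and a table can have at most one primary key. A minimal single-column example on the tools table might look like this:
warehouse_db=# CREATE TABLE tools
(
  tool_id INTEGER PRIMARY KEY,
  tool_name TEXT,
  tool_class NUMERIC
);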
You can also create a primary key constraint based on two columns. Consider the
following example:
warehouse_db=# CREATE TABLE tools
(
tool_id INTEGER,
tool_name TEXT,
tool_class NUMERIC,
PRIMARY KEY (tool_id, tool_name)
);
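Foreign key constraints
A foreign key constraint requires that the values in a column match values present in a column of another (parent) table. The tools_list table discussed next can be sketched as follows; the extra columns are illustrative, and it assumes a tools table whose primary key is the single tool_id column (as in the earlier single-column example):
warehouse_db=# CREATE TABLE tools_list
(
  list_id INTEGER,
  tool_id INTEGER,
  list_name TEXT,
  CONSTRAINT tools_list_fkey FOREIGN KEY (tool_id)
    REFERENCES tools (tool_id)
);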
In the preceding query, we created a table with the name tools_list that has
a foreign key on the tool_id column, referencing the tool_id column of the
tools table.
A table can have multiple parent tables, which means that we can
have more than one foreign key in a single table.
Check constraints
A check constraint lets you define a condition that a column's values must satisfy,
expressed as a Boolean expression. Let's understand this with some examples:
warehouse_db=# CREATE TABLE tools
(
tool_id INTEGER PRIMARY KEY,
tool_name TEXT,
tool_class NUMERIC,
tool_quantity NUMERIC CHECK (tool_quantity > 0)
);
You can also give your constraints a more user-friendly name; see the following
example, in which we name the constraint positive_quantity:
warehouse_db=# CREATE TABLE tools
(
tool_id INTEGER PRIMARY KEY,
tool_name TEXT,
tool_class NUMERIC,
tool_quantity NUMERIC
CONSTRAINT positive_quantity CHECK
(tool_quantity>0)
);
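To see the constraint in action, an insert that violates the condition, such as a non-positive quantity, is rejected by PostgreSQL with a check constraint violation error; for example:
warehouse_db=# INSERT INTO tools (tool_id, tool_name, tool_class, tool_quantity)
VALUES (1, 'hammer', 1, -5);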
Privileges in PostgreSQL
In PostgreSQL, multiple privileges are present for every object that is created.
By default, the owner (or a superuser) of an object has all the privileges on it.
In PostgreSQL, the following types of privileges are present:
• SELECT
• INSERT
• UPDATE
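• DELETE
• TRUNCATE
• REFERENCES
• TRIGGER
• CREATE
• CONNECT
• TEMPORARY
• EXECUTE
• USAGE
Privileges are granted and revoked with the GRANT and REVOKE commands. As a simple, hypothetical example (the role name is an assumption), giving a user read and write access to the warehouse_tbl table might look like this:
warehouse_db=# GRANT SELECT, INSERT, UPDATE ON warehouse_tbl TO warehouse_user;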