NumPy User Guide
Release 1.10.1
CONTENTS

1 Introduction
    1.1 What is NumPy?
    1.2 Building and installing NumPy
    1.3 How to find documentation

2 Numpy basics
    2.1 Data types
    2.2 Array creation
    2.3 I/O with Numpy
    2.4 Indexing
    2.5 Broadcasting
    2.6 Byte-swapping
    2.7 Structured arrays
    2.8 Subclassing ndarray

3 Performance

4 Miscellaneous
    4.1 IEEE 754 Floating Point Special Values
    4.2 How numpy handles numerical exceptions
    4.3 Examples
    4.4 Interfacing to C
    4.5 Interfacing to Fortran:
    4.6 Interfacing to C++:
    4.7 Methods vs. Functions

Index
This guide is intended as an introductory overview of NumPy and explains how to install and make use of the most
important features of NumPy. For detailed reference documentation of the functions and classes contained in the
package, see the reference.
Warning: This User Guide is still a work in progress; some of the material is not organized, and several aspects
of NumPy are not yet covered in sufficient detail. We are an open source community continually working to improve
the documentation and eagerly encourage interested parties to contribute. For information on how to do so, please
visit the NumPy doc wiki.
More documentation for NumPy can be found on the numpy.org website.
Thanks!
CHAPTER ONE

INTRODUCTION
This produces the correct answer, but if a and b each contain millions of numbers, we will pay the price for the
inefficiencies of looping in Python. We could accomplish the same task much more quickly in C by writing (for clarity
we neglect variable declarations and initializations, memory allocation, etc.)
for (i = 0; i < rows; i++) {
    c[i] = a[i]*b[i];
}
This saves all the overhead involved in interpreting the Python code and manipulating Python objects, but at the
expense of the benefits gained from coding in Python. Furthermore, the coding work required increases with the
dimensionality of our data. In the case of a 2-D array, for example, the C code (abridged as before) expands to
for (i = 0; i < rows; i++) {
    for (j = 0; j < columns; j++) {
        c[i][j] = a[i][j]*b[i][j];
    }
}
NumPy gives us the best of both worlds: element-by-element operations are the default mode when an ndarray is
involved, but the element-by-element operation is speedily executed by pre-compiled C code. In NumPy
c = a * b
does what the earlier examples do, at near-C speeds, but with the code simplicity we expect from something based on
Python. Indeed, the NumPy idiom is even simpler! This last example illustrates two of NumPy's features which are
the basis of much of its power: vectorization and broadcasting.
Vectorization describes the absence of any explicit looping, indexing, etc., in the code - these things are taking place,
of course, just behind the scenes in optimized, pre-compiled C code. Vectorized code has many advantages, among
which are:
- vectorized code is more concise and easier to read
- fewer lines of code generally means fewer bugs
- the code more closely resembles standard mathematical notation (making it easier, typically, to correctly code mathematical constructs)
- vectorization results in more "Pythonic" code. Without vectorization, our code would be littered with inefficient and difficult to read for loops.
Broadcasting is the term used to describe the implicit element-by-element behavior of operations; generally speaking,
in NumPy all operations, not just arithmetic operations, but logical, bit-wise, functional, etc., behave in this implicit
element-by-element fashion, i.e., they broadcast. Moreover, in the example above, a and b could be multidimensional
arrays of the same shape, or a scalar and an array, or even two arrays with different shapes, provided that the smaller
array is expandable to the shape of the larger in such a way that the resulting broadcast is unambiguous. For detailed
rules of broadcasting see numpy.doc.broadcasting.
NumPy fully supports an object-oriented approach, starting, once again, with ndarray. For example, ndarray is a
class, possessing numerous methods and attributes. Many of its methods mirror functions in the outer-most NumPy
namespace, giving the programmer complete freedom to code in whichever paradigm she prefers and/or which seems
most appropriate to the task at hand.
A lightweight alternative is to download the Python installer from www.python.org and the NumPy installer for your
Python version from the Sourceforge download site <http://sourceforge.net/projects/numpy/files/NumPy/>.
The NumPy installer includes binaries for different CPUs (without SSE instructions, with SSE2 or with SSE3) and
installs the correct one automatically. If needed, this can be bypassed from the command line with
numpy-<1.y.z>-superpack-win32.exe /arch nosse
To perform an in-place build that can be run from the source folder run:
python setup.py build_ext --inplace
The NumPy build system uses distutils and numpy.distutils. setuptools is only used when building
via pip or with python setupegg.py. Using virtualenv should work as expected.
Note: for build instructions to do development work on NumPy itself, see the development-environment section of the documentation.
Parallel builds
From NumPy 1.10.0 on it's also possible to do a parallel build with:
python setup.py build -j 4 install --prefix $HOME/.local
This will compile numpy on 4 CPUs and install it into the specified prefix. To perform a parallel in-place build, run:
python setup.py build_ext --inplace -j 4
The number of build jobs can also be specified via the environment variable NPY_NUM_BUILD_JOBS.
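For example, the following invocation (an illustrative sketch, assuming a POSIX-style shell) should be equivalent to passing -j 4 on the command line:

NPY_NUM_BUILD_JOBS=4 python setup.py build_ext --inplace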
FORTRAN ABI mismatch
The two most popular open source Fortran compilers are g77 and gfortran. Unfortunately, they are not ABI compatible, which means that concretely you should avoid mixing libraries built with one with libraries built with the other. In particular, if your blas/lapack/atlas is built with g77, you must use g77 when building numpy and scipy; conversely, if your atlas is built with gfortran, you must build numpy/scipy with gfortran. This applies for most other cases where different Fortran compilers might have been used.
Choosing the fortran compiler
To build with g77:
python setup.py build --fcompiler=gnu
CHAPTER TWO

NUMPY BASICS
2.1 Data types

Data type    Description
bool_        Boolean (True or False) stored as a byte
int_         Default integer type (same as C long; normally either int64 or int32)
intc         Identical to C int (normally int32 or int64)
intp         Integer used for indexing (same as C ssize_t; normally either int32 or int64)
int8         Byte (-128 to 127)
int16        Integer (-32768 to 32767)
int32        Integer (-2147483648 to 2147483647)
int64        Integer (-9223372036854775808 to 9223372036854775807)
uint8        Unsigned integer (0 to 255)
uint16       Unsigned integer (0 to 65535)
uint32       Unsigned integer (0 to 4294967295)
uint64       Unsigned integer (0 to 18446744073709551615)
float_       Shorthand for float64.
float16      Half precision float: sign bit, 5 bits exponent, 10 bits mantissa
float32      Single precision float: sign bit, 8 bits exponent, 23 bits mantissa
float64      Double precision float: sign bit, 11 bits exponent, 52 bits mantissa
complex_     Shorthand for complex128.
complex64    Complex number, represented by two 32-bit floats (real and imaginary components)
complex128   Complex number, represented by two 64-bit floats (real and imaginary components)
In addition to intc, the platform-dependent C integer types short, long, longlong and their unsigned versions
are defined.
Numpy numerical types are instances of dtype (data-type) objects, each having unique characteristics. Once you
have imported NumPy using
>>> import numpy as np
the dtypes are available as np.bool_, np.float32, etc.
Array types can also be referred to by character codes, mostly to retain backward compatibility with older packages
such as Numeric. Some documentation may still refer to these, for example:
>>> np.array([1, 2, 3], dtype='f')
array([ 1., 2., 3.], dtype=float32)
Note that, above, we use the Python float object as a dtype. NumPy knows that int refers to np.int_, bool means
np.bool_, that float is np.float_ and complex is np.complex_. The other data-types do not have Python
equivalents.
To determine the type of an array, look at the dtype attribute:
>>> z.dtype
dtype('uint8')
dtype objects also contain information about the type, such as its bit-width and its byte-order. The data type can also
be used indirectly to query properties of the type, such as whether it is an integer:
>>> d = np.dtype(int)
>>> d
dtype('int32')
>>> np.issubdtype(d, int)
True
2.2 Array creation

2.2.1 Introduction
There are 5 general mechanisms for creating arrays:
1. Conversion from other Python structures (e.g., lists, tuples)
2. Intrinsic numpy array creation objects (e.g., arange, ones, zeros, etc.)
3. Reading arrays from disk, either from standard or custom formats
4. Creating arrays from raw bytes through the use of strings or buffers
5. Use of special library functions (e.g., random)
This section will not cover means of replicating, joining, or otherwise expanding or mutating existing arrays. Nor will
it cover creating object arrays or structured arrays. Both of those are covered in their own sections.
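As a quick illustration (not part of the original text), the first two mechanisms look like this:
>>> np.array([[1, 2], [3, 4]])   # conversion from a nested Python list
array([[1, 2],
       [3, 4]])
>>> np.zeros((2, 3))             # an intrinsic array-creation function
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])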
>>> np.array([[1, 2.0], [0, 0], (1+1j, 3.)])   # note mix of tuple and lists, and types
array([[ 1.+0.j,  2.+0.j],
       [ 0.+0.j,  0.+0.j],
       [ 1.+1.j,  3.+0.j]])
Note that there are some subtleties regarding the last usage that the user should be aware of that are described in the
arange docstring.
linspace() will create arrays with a specified number of elements, and spaced equally between the specified beginning
and end values. For example:
>>> np.linspace(1., 4., 6)
array([ 1. ,  1.6,  2.2,  2.8,  3.4,  4. ])
The advantage of this creation function is that one can guarantee the number of elements and the starting and end
point, which arange() generally will not do for arbitrary start, stop, and step values.
indices() will create a set of arrays (stacked as a one-higher dimensioned array), one per dimension with each representing variation in that dimension. An example illustrates much better than a verbal description:
>>> np.indices((3,3))
array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]])
This is particularly useful for evaluating functions of multiple dimensions on a regular grid.
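For instance (an illustrative sketch, not from the original text), the index grids returned by indices() can be fed directly into a vectorized expression:
>>> i, j = np.indices((3, 3))
>>> i + 10*j        # evaluate a simple function of the two grid coordinates
array([[ 0, 10, 20],
       [ 1, 11, 21],
       [ 2, 12, 22]])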
Examples of formats that cannot be read directly but for which it is not hard to convert are those formats supported by
libraries like PIL (able to read and write many image formats such as jpg, png, etc).
Common ASCII Formats
Comma Separated Value files (CSV) are widely used (and an export and import option for programs like Excel). There
are a number of ways of reading these files in Python. There are CSV functions in Python and functions in pylab (part
of matplotlib).
More generic ascii files can be read using the io package in scipy.
Custom Binary Formats
There are a variety of approaches one can use. If the file has a relatively simple format then one can write a simple
I/O library and use the numpy fromfile() function and .tofile() method to read and write numpy arrays directly (mind
your byteorder though!) If a good C or C++ library exists that reads the data, one can wrap that library with a variety of
techniques though that certainly is much more work and requires significantly more advanced knowledge to interface
with C or C++.
Use of Special Libraries
There are libraries that can be used to generate arrays for special purposes and it isn't possible to enumerate all of
them. The most common uses are use of the many array generation functions in random that can generate arrays of
random values, and some utility functions to generate special matrices (e.g. diagonal).
Another common separator is "\t", the tabulation character. However, we are not limited to a single character, any
string will do. By default, genfromtxt assumes delimiter=None, meaning that the line is split along white
spaces (including tabs) and that consecutive white spaces are considered as a single white space.
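A short illustration of this default whitespace splitting (not in the original text; it assumes StringIO has been imported as in the other examples of this section):
>>> data = "1 2 3\n 4  5 6"
>>> np.genfromtxt(StringIO(data))   # consecutive whitespace acts as one delimiter
array([[ 1.,  2.,  3.],
       [ 4.,  5.,  6.]])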
Alternatively, we may be dealing with a fixed-width file, where columns are defined as a given number of characters.
In that case, we need to set delimiter to a single integer (if all the columns have the same size) or to a sequence of
integers (if columns can have different sizes):
>>> data = " 1 2 3\n 4 5 67\n890123 4"
>>> np.genfromtxt(StringIO(data), delimiter=3)
array([[
1.,
2.,
3.],
[
4.,
5.,
67.],
[ 890., 123.,
4.]])
>>> data = "123456789\n
4 7 9\n
4567 9"
>>> np.genfromtxt(StringIO(data), delimiter=(4, 3, 2))
array([[ 1234.,
567.,
89.],
[
4.,
7.,
9.],
[
4.,
567.,
9.]])
Note: There is one notable exception to this behavior: if the optional argument names=True, the first commented
line will be examined for names.
If the columns have names, we can also select which columns to import by giving their name to the usecols
argument, either as a sequence of strings or a comma-separated string:
>>> data = "1 2 3\n4 5 6"
>>> np.genfromtxt(StringIO(data),
15
...
names="a, b, c", usecols=("a", "c"))
array([(1.0, 3.0), (4.0, 6.0)],
dtype=[(a, <f8), (c, <f8)])
>>> np.genfromtxt(StringIO(data),
...
names="a, b, c", usecols=("a, c"))
array([(1.0, 3.0), (4.0, 6.0)],
dtype=[(a, <f8), (c, <f8)])
Another simpler possibility is to use the names keyword with a sequence of strings or a comma-separated string:
>>> data = StringIO("1 2 3\n 4 5 6")
>>> np.genfromtxt(data, names="A, B, C")
array([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)],
      dtype=[('A', '<f8'), ('B', '<f8'), ('C', '<f8')])
In the example above, we used the fact that by default, dtype=float. By giving a sequence of names, we are
forcing the output to a structured dtype.
We may sometimes need to define the column names from the data itself. In that case, we must use the names
keyword with a value of True. The names will then be read from the first line (after the skip_header ones), even
if the line is commented out:
>>> data = StringIO("So it goes\n#a b c\n1 2 3\n 4 5 6")
>>> np.genfromtxt(data, skip_header=1, names=True)
array([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)],
      dtype=[('a', '<f8'), ('b', '<f8'), ('c', '<f8')])
The default value of names is None. If we give any other value to the keyword, the new names will overwrite the
field names we may have defined with the dtype:
>>> data = StringIO("1 2 3\n 4 5 6")
>>> ndtype = [('a', int), ('b', float), ('c', int)]
>>> names = ["A", "B", "C"]
>>> np.genfromtxt(data, names=names, dtype=ndtype)
array([(1, 2.0, 3), (4, 5.0, 6)],
      dtype=[('A', '<i8'), ('B', '<f8'), ('C', '<i8')])
In the same way, if we don't give enough names to match the length of the dtype, the missing names will be defined
with this default template:
>>> data = StringIO("1 2 3\n 4 5 6")
>>> np.genfromtxt(data, dtype=(int, float, int), names="a")
array([(1, 2.0, 3), (4, 5.0, 6)],
      dtype=[('a', '<i8'), ('f0', '<f8'), ('f1', '<i8')])
We can overwrite this default with the defaultfmt argument, which takes any format string:
>>> data = StringIO("1 2 3\n 4 5 6")
>>> np.genfromtxt(data, dtype=(int, float, int), defaultfmt="var_%02i")
array([(1, 2.0, 3), (4, 5.0, 6)],
      dtype=[('var_00', '<i8'), ('var_01', '<f8'), ('var_02', '<i8')])
Note: We need to keep in mind that defaultfmt is used only if some names are expected but not defined.
Validating names
Numpy arrays with a structured dtype can also be viewed as recarray, where a field can be accessed as if it were an
attribute. For that reason, we may need to make sure that the field name doesn't contain any space or invalid character,
or that it does not correspond to the name of a standard attribute (like size or shape), which would confuse the
interpreter. genfromtxt accepts three optional arguments that provide a finer control on the names:
deletechars
Gives a string combining all the characters that must be deleted from the name. By default, invalid
characters are ~!@#$%^&*()-=+~\|]}[{;: /?.>,<.
excludelist
Gives a list of the names to exclude, such as return, file, print... If one of the input names is
part of this list, an underscore character (_) will be appended to it.
case_sensitive
Whether the names should be case-sensitive (case_sensitive=True), converted to upper case (case_sensitive=False or case_sensitive='upper') or to lower case
(case_sensitive='lower').
Tweaking the conversion
The converters argument
Usually, defining a dtype is sufficient to define how the sequence of strings must be converted. However, some
additional control may sometimes be required. For example, we may want to make sure that a date in a format
YYYY/MM/DD is converted to a datetime object, or that a string like xx% is properly converted to a float between
0 and 1. In such cases, we should define conversion functions with the converters argument.
The value of this argument is typically a dictionary with column indices or column names as keys and conversion
functions as values. These conversion functions can either be actual functions or lambda functions. In any case, they
should accept only a string as input and output only a single element of the wanted type.
In the following example, the second column is converted from a string representing a percentage to a float between
0 and 1:
>>> convertfunc = lambda x: float(x.strip("%"))/100.
>>> data = "1, 2.3%, 45.\n6, 78.9%, 0"
>>> names = ("i", "p", "n")
>>> # General case .....
>>> np.genfromtxt(StringIO(data), delimiter=",", names=names)
array([(1.0, nan, 45.0), (6.0, nan, 0.0)],
      dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')])
We need to keep in mind that by default, dtype=float. A float is therefore expected for the second column.
However, the strings '2.3%' and '78.9%' cannot be converted to float and we end up having np.nan instead.
Let's now use a converter:
>>> # Converted case ...
>>> np.genfromtxt(StringIO(data), delimiter=",", names=names,
...
converters={1: convertfunc})
array([(1.0, 0.023, 45.0), (6.0, 0.78900000000000003, 0.0)],
dtype=[(i, <f8), (p, <f8), (n, <f8)])
The same results can be obtained by using the name of the second column ("p") as key instead of its index (1):
>>> # Using a name for the converter ...
>>> np.genfromtxt(StringIO(data), delimiter=",", names=names,
...
converters={"p": convertfunc})
array([(1.0, 0.023, 45.0), (6.0, 0.78900000000000003, 0.0)],
dtype=[(i, <f8), (p, <f8), (n, <f8)])
Converters can also be used to provide a default for missing entries. In the following example, the converter convert
transforms a stripped string into the corresponding float or into -999 if the string is empty. We need to explicitly strip
the string from white spaces as it is not done by default:
>>> data = "1, , 3\n 4, 5, 6"
>>> convert = lambda x: float(x.strip() or -999)
>>> np.genfromtxt(StringIO(data), delimiter=",",
...
converter={1: convert})
18
array([[
[
1., -999.,
4.,
5.,
3.],
6.]])
By default, missing values are substituted according to the expected dtype:

Expected type    Default
bool             False
int              -1
float            np.nan
complex          np.nan+0j
string           '???'
We can get a finer control on the conversion of missing values with the filling_values optional argument. Like
missing_values, this argument accepts different kinds of values:
a single value
This will be the default for all columns
a sequence of values
Each entry will be the default for the corresponding column
a dictionary
Each key can be a column index or a column name, and the corresponding value should be a single
object. We can use the special key None to define a default for all columns.
In the following example, we suppose that the missing values are flagged with "N/A" in the first column and by
"???" in the third column. We wish to transform these missing values to 0 if they occur in the first and second
column, and to -999 if they occur in the last column:
>>> data = "N/A, 2, 3\n4, ,???"
>>> kwargs = dict(delimiter=",",
...
dtype=int,
...
names="a,b,c",
...
missing_values={0:"N/A", b:" ", 2:"???"},
19
...
filling_values={0:0, b:0, 2:-999})
>>> np.genfromtxt(StringIO.StringIO(data), **kwargs)
array([(0, 2, 3), (4, 0, -999)],
dtype=[(a, <i8), (b, <i8), (c, <i8)])
usemask
We may also want to keep track of the occurrence of missing data by constructing a boolean mask, with True entries
where data was missing and False otherwise. To do that, we just have to set the optional argument usemask to
True (the default is False). The output array will then be a MaskedArray.
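A brief sketch of this option (illustrative, reusing the kind of data shown earlier):
>>> data = "1, , 3\n 4, 5, 6"
>>> arr = np.genfromtxt(StringIO(data), delimiter=",", usemask=True)
>>> arr.mask            # the empty field in the first row is masked
array([[False,  True, False],
       [False, False, False]], dtype=bool)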
Shortcut functions
In addition to genfromtxt, the numpy.lib.io module provides several convenience functions derived from
genfromtxt. These functions work the same way as the original, but they have different default values.
ndfromtxt
Always set usemask=False. The output is always a standard numpy.ndarray.
mafromtxt
Always set usemask=True. The output is always a MaskedArray
recfromtxt
Returns a standard numpy.recarray (if usemask=False) or a MaskedRecords array (if
usemask=True). The default dtype is dtype=None, meaning that the types of each column will be automatically determined.
recfromcsv
Like recfromtxt, but with a default delimiter=",".
2.4 Indexing
See also:
Indexing routines
Array indexing refers to any use of the square brackets ([]) to index array values. There are many options to indexing,
which give numpy indexing great power, but with power comes some complexity and the potential for confusion. This
section is just an overview of the various options and issues related to indexing. Aside from single element indexing,
the details on most of these options are to be found in related sections.
>>> x = np.arange(10)
>>> x[2]
2
>>> x[-2]
8
Unlike lists and tuples, numpy arrays support multidimensional indexing for multidimensional arrays. That means that
it is not necessary to separate each dimension's index into its own set of square brackets.
>>> x.shape = (2,5) # now x is 2-dimensional
>>> x[1,3]
8
>>> x[1,-1]
9
Note that if one indexes a multidimensional array with fewer indices than dimensions, one gets a subdimensional array.
For example:
>>> x[0]
array([0, 1, 2, 3, 4])
That is, each index specified selects the array corresponding to the rest of the dimensions selected. In the above
example, choosing 0 means that the remaining dimension of length 5 is being left unspecified, and that what is returned
is an array of that dimensionality and size. It must be noted that the returned array is not a copy of the original, but
points to the same values in memory as does the original array. In this case, the 1-D array at the first position (0) is
returned. So using a single index on the returned array, results in a single element being returned. That is:
>>> x[0][2]
2
So note that x[0,2] = x[0][2] though the second case is more inefficient as a new temporary array is created
after the first index that is subsequently indexed by 2.
Note to those used to IDL or Fortran memory order as it relates to indexing. Numpy uses C-order indexing. That
means that the last index usually represents the most rapidly changing memory location, unlike Fortran or IDL, where
the first index represents the most rapidly changing location in memory. This difference represents a great potential
for confusion.
Note that slices of arrays do not copy the internal array data but only produce new views of the original data.
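A quick illustration of this view behaviour (not in the original text):
>>> x = np.arange(10)
>>> v = x[2:5]      # a view, not a copy
>>> v[0] = 99
>>> x               # the change shows up in the original array
array([ 0,  1, 99,  3,  4,  5,  6,  7,  8,  9])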
It is possible to index arrays with other arrays for the purposes of selecting lists of values out of arrays into new arrays.
There are two different ways of accomplishing this. One uses one or more arrays of index values. The other involves
giving a boolean array of the proper shape to indicate the values to be selected. Index arrays are a very powerful tool
that allow one to avoid looping over individual elements in arrays and thus greatly improve performance.
It is possible to use special features to effectively increase the number of dimensions in an array through indexing so
the resulting array acquires the shape needed for use in an expression or with a specific function.
>>> x = np.arange(10, 1, -1)
>>> x
array([10,  9,  8,  7,  6,  5,  4,  3,  2])
>>> x[np.array([3, 3, 1, 8])]
array([7, 7, 9, 2])
The index array consisting of the values 3, 3, 1 and 8 correspondingly creates an array of length 4 (the same as the index array) where each index has been replaced by the corresponding value from the array being indexed.
Negative values are permitted and work as they do with single indices or slices:
>>> x[np.array([3,3,-3,8])]
array([7, 7, 4, 2])
Generally speaking, what is returned when index arrays are used is an array with the same shape as the index array,
but with the type and values of the array being indexed. As an example, we can use a multidimensional index array
instead:
>>> x[np.array([[1,1],[2,3]])]
array([[9, 9],
[8, 7]])
In this case, if the index arrays have a matching shape, and there is an index array for each dimension of the array
being indexed, the resultant array has the same shape as the index arrays, and the values correspond to the index set
for each position in the index arrays. In this example, the first index value is 0 for both index arrays, and thus the first
value of the resultant array is y[0,0]. The next value is y[2,1], and the last is y[4,2].
If the index arrays do not have the same shape, there is an attempt to broadcast them to the same shape. If they cannot
be broadcast to the same shape, an exception is raised:
>>> y[np.array([0,2,4]), np.array([0,1])]
<type 'exceptions.ValueError'>: shape mismatch: objects cannot be
broadcast to a single shape
The broadcasting mechanism permits index arrays to be combined with scalars for other indices. The effect is that the
scalar value is used for all the corresponding values of the index arrays:
>>> y[np.array([0,2,4]), 1]
array([ 1, 15, 29])
Jumping to the next level of complexity, it is possible to only partially index an array with index arrays. It takes a bit
of thought to understand what happens in such cases. For example if we just use one index array with y:
>>> y[np.array([0,2,4])]
array([[ 0, 1, 2, 3, 4, 5, 6],
[14, 15, 16, 17, 18, 19, 20],
[28, 29, 30, 31, 32, 33, 34]])
What results is the construction of a new array where each value of the index array selects one row from the array
being indexed and the resultant array has the resulting shape (number of index elements, size of row).
An example of where this may be useful is for a color lookup table where we want to map the values of an image into
RGB triples for display. The lookup table could have a shape (nlookup, 3). Indexing such an array with an image with
shape (ny, nx) with dtype=np.uint8 (or any integer type so long as values are within the bounds of the lookup table) will
result in an array of shape (ny, nx, 3) where a triple of RGB values is associated with each pixel location.
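A small sketch of that idea; the palette and image sizes below are made up for illustration:
>>> lut = np.zeros((256, 3), dtype=np.uint8)    # hypothetical (nlookup, 3) RGB palette
>>> image = np.zeros((4, 5), dtype=np.uint8)    # hypothetical (ny, nx) image of palette indices
>>> rgb = lut[image]                            # index the palette with the whole image
>>> rgb.shape
(4, 5, 3)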
In general, the shape of the resultant array will be the concatenation of the shape of the index array (or the shape that
all the index arrays were broadcast to) with the shape of any unused dimensions (those not indexed) in the array being
indexed.
Unlike in the case of integer index arrays, in the boolean case, the result is a 1-D array containing all the elements in
the indexed array corresponding to all the true elements in the boolean array. The elements in the indexed array are
always iterated and returned in row-major (C-style) order. The result is also identical to y[np.nonzero(b)]. As
with index arrays, what is returned is a copy of the data, not a view as one gets with slices.
The result will be multidimensional if y has more dimensions than b. For example:
>>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y
array([False, False, False, True, True], dtype=bool)
>>> y[b[:,5]]
array([[21, 22, 23, 24, 25, 26, 27],
[28, 29, 30, 31, 32, 33, 34]])
Here the 4th and 5th rows are selected from the indexed array and combined to make a 2-D array.
In general, when the boolean array has fewer dimensions than the array being indexed, this is equivalent to y[b, ...],
which means y is indexed by b followed by as many : as are needed to fill out the rank of y. Thus the shape of the result
is one dimension containing the number of True elements of the boolean array, followed by the remaining dimensions
of the array being indexed.
For example, using a 2-D boolean array of shape (2,3) with four True elements to select rows from a 3-D array of
shape (2,3,5) results in a 2-D result of shape (4,5):
>>> x = np.arange(30).reshape(2,3,5)
>>> x
array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]],
[[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]]])
>>> b = np.array([[True, True, False], [False, True, True]])
>>> x[b]
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]])
For further details, consult the numpy reference documentation on array indexing.
In effect, the slice is converted to an index array np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array
to produce a resultant array of shape (3,2).
Likewise, slicing can be combined with broadcasted boolean indices:
>>> y[b[:,5],1:3]
array([[22, 23],
[29, 30]])
>>> y.shape
(5, 7)
>>> y[:, np.newaxis, :].shape
(5, 1, 7)
Note that there are no new elements in the array, just that the dimensionality is increased. This can be handy to
combine two arrays in a way that otherwise would require explicit reshaping operations. For example:
>>> x = np.arange(5)
>>> x[:,np.newaxis] + x[np.newaxis,:]
array([[0, 1, 2, 3, 4],
[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7],
[4, 5, 6, 7, 8]])
The ellipsis syntax may be used to indicate selecting in full any remaining unspecified dimensions. For example:
>>> z = np.arange(81).reshape(3,3,3,3)
>>> z[1,...,2]
array([[29, 32, 35],
[38, 41, 44],
[47, 50, 53]])
Note that assignments may result in changes if assigning higher types to lower types (like floats to ints) or even
exceptions (assigning complex to floats or ints):
>>> x[1] = 1.2
>>> x[1]
1
>>> x[1] = 1.2j
<type 'exceptions.TypeError'>: can't convert complex to long; use
long(abs(z))
Unlike some of the references (such as array and mask indices) assignments are always made to the original data in
the array (indeed, nothing else would make sense!). Note though, that some actions may not work as one may naively
expect. This particular example is often surprising to people:
>>> x = np.arange(0, 50, 10)
>>> x
array([ 0, 10, 20, 30, 40])
>>> x[np.array([1, 1, 3, 1])] += 1
>>> x
array([ 0, 11, 20, 31, 40])
People expect that the 1st location will be incremented by 3. In fact, it will only be incremented by 1. The
reason is because a new array is extracted from the original (as a temporary) containing the values at 1, 1, 3, 1, then
the value 1 is added to the temporary, and then the temporary is assigned back to the original array. Thus the value of
the array at x[1]+1 is assigned to x[1] three times, rather than being incremented 3 times.
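If the accumulating behaviour is what you actually want, the unbuffered ufunc method np.add.at (available since NumPy 1.8) can be used instead; a brief sketch:
>>> x = np.arange(0, 50, 10)
>>> np.add.at(x, [1, 1, 3, 1], 1)   # repeated indices now accumulate
>>> x
array([ 0, 13, 20, 31, 40])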
So one can use code to construct tuples of any number of indices and then use these within an index.
Slices can be specified within programs by using the slice() function in Python. For example:
>>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
>>> z[indices]
array([39, 40])
For this reason it is possible to use the output from the np.where() function directly as an index since it always returns
a tuple of index arrays.
Because of the special treatment of tuples, they are not automatically converted to an array as a list would be. As an
example:
>>> z[[1,1,1,1]] # produces a large array
array([[[[27, 28, 29],
[30, 31, 32], ...
>>> z[(1,1,1,1)] # returns a single value
40
2.5 Broadcasting
See also:
numpy.broadcast
The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject
to certain constraints, the smaller array is broadcast across the larger array so that they have compatible shapes.
Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does
this without making needless copies of data and usually leads to efficient algorithm implementations. There are,
however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation.
NumPy operations are usually done on pairs of arrays on an element-by-element basis. In the simplest case, the two
arrays must have exactly the same shape, as in the following example:
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = np.array([2.0, 2.0, 2.0])
>>> a * b
array([ 2.,  4.,  6.])
NumPy's broadcasting rule relaxes this constraint when the arrays' shapes meet certain constraints. The simplest
broadcasting example occurs when an array and a scalar value are combined in an operation:
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = 2.0
>>> a * b
array([ 2.,  4.,  6.])
The result is equivalent to the previous example where b was an array. We can think of the scalar b being stretched
during the arithmetic operation into an array with the same shape as a. The new elements in b are simply copies
of the original scalar. The stretching analogy is only conceptual. NumPy is smart enough to use the original scalar
value without actually making copies, so that broadcasting operations are as memory and computationally efficient as
possible.
The code in the second example is more efficient than that in the first because broadcasting moves less memory around
during the multiplication (b is a scalar rather than an array).
When either of the dimensions compared is one, the other is used. In other words, dimensions with size 1 are stretched
or copied to match the other.
In the following example, both the A and B arrays have axes with length one that are expanded to a larger size during
the broadcast operation:
A      (4d array):  8 x 1 x 6 x 1
B      (3d array):      7 x 1 x 5
Result (4d array):  8 x 7 x 6 x 5
A      (2d array):  5 x 4
B      (1d array):      1
Result (2d array):  5 x 4

A      (2d array):  5 x 4
B      (1d array):      4
Result (2d array):  5 x 4

A      (3d array):  15 x 3 x 5
B      (3d array):  15 x 1 x 5
Result (3d array):  15 x 3 x 5

A      (3d array):  15 x 3 x 5
B      (2d array):       3 x 5
Result (3d array):  15 x 3 x 5

A      (3d array):  15 x 3 x 5
B      (2d array):       3 x 1
Result (3d array):  15 x 3 x 5
The following examples show shapes that do not broadcast:

A      (1d array):  3
B      (1d array):  4            # trailing dimensions do not match

A      (2d array):      2 x 1
B      (3d array):  8 x 4 x 3    # second from last dimensions mismatched
x = np.arange(4)
xx = x.reshape(4,1)
y = np.ones(5)
z = np.ones((3,4))
>>> x.shape
(4,)
>>> y.shape
(5,)
>>> x + y
<type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape
>>> xx.shape
(4, 1)
>>> y.shape
(5,)
>>> (xx + y).shape
(4, 5)
>>> xx + y
array([[ 1.,  1.,  1.,  1.,  1.],
       [ 2.,  2.,  2.,  2.,  2.],
       [ 3.,  3.,  3.,  3.,  3.],
       [ 4.,  4.,  4.,  4.,  4.]])
>>> x.shape
(4,)
>>> z.shape
(3, 4)
>>> (x + z).shape
(3, 4)
>>> x + z
array([[ 1.,  2.,  3.,  4.],
       [ 1.,  2.,  3.,  4.],
       [ 1.,  2.,  3.,  4.]])
Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. The
following example shows an outer addition operation of two 1-d arrays:
>>> a = np.array([0.0, 10.0, 20.0, 30.0])
>>> b = np.array([1.0, 2.0, 3.0])
>>> a[:, np.newaxis] + b
array([[  1.,   2.,   3.],
       [ 11.,  12.,  13.],
       [ 21.,  22.,  23.],
       [ 31.,  32.,  33.]])
Here the newaxis index operator inserts a new axis into a, making it a two-dimensional 4x1 array. Combining the
4x1 array with b, which has shape (3,), yields a 4x3 array.
See this article for illustrations of broadcasting concepts.
2.6 Byte-swapping
2.6.1 Introduction to byte ordering and ndarrays
The ndarray is an object that provides a Python array interface to data in memory.
It often happens that the memory that you want to view with an array is not of the same byte ordering as the computer
on which you are running Python.
For example, I might be working on a computer with a little-endian CPU - such as an Intel Pentium, but I have loaded
some data from a file written by a computer that is big-endian. Let's say I have loaded 4 bytes from a file written
by a Sun (big-endian) computer. I know that these 4 bytes represent two 16-bit integers. On a big-endian machine, a
two-byte integer is stored with the Most Significant Byte (MSB) first, and then the Least Significant Byte (LSB). Thus
the bytes are, in memory order:
1. MSB integer 1
2. LSB integer 1
3. MSB integer 2
4. LSB integer 2
Lets say the two integers were in fact 1 and 770. Because 770 = 256 * 3 + 2, the 4 bytes in memory would contain
respectively: 0, 1, 3, 2. The bytes I have loaded from the file would have these contents:
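The snippet that builds this byte string is not reproduced in this excerpt; a minimal reconstruction, following the byte values 0, 1, 3, 2 described above, would be:
>>> big_end_str = chr(0) + chr(1) + chr(3) + chr(2)
>>> big_end_str
'\x00\x01\x03\x02'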
We might want to use an ndarray to access these integers. In that case, we can create an array around this memory,
and tell numpy that there are two integers, and that they are 16 bit and big-endian:
>>> import numpy as np
>>> big_end_arr = np.ndarray(shape=(2,), dtype='>i2', buffer=big_end_str)
>>> big_end_arr[0]
1
>>> big_end_arr[1]
770
Note the array dtype above of '>i2'. The '>' means big-endian ('<' is little-endian) and 'i2' means 'signed 2-byte
integer'. For example, if our data represented a single unsigned 4-byte little-endian integer, the dtype string would be
'<u4'.
In fact, why don't we try that?
>>> little_end_u4 = np.ndarray(shape=(1,), dtype='<u4', buffer=big_end_str)
>>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3
True
Returning to our big_end_arr - in this case our underlying data is big-endian (data endianness) and we've set the
dtype to match (the dtype is also big-endian). However, sometimes you need to flip these around.
Warning: Scalars currently do not include byte order information, so extracting a scalar from an array will return
an integer in native byte order. Hence:
>>> big_end_arr[0].dtype.byteorder == little_end_u4[0].dtype.byteorder
True
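The scenario discussed next assumes an array whose dtype endianness does not match the endianness of the underlying data. A minimal sketch of such an array (reusing big_end_str; the variable name follows the snippet below):
>>> wrong_end_dtype_arr = np.ndarray(shape=(2,), dtype='<i2', buffer=big_end_str)
>>> wrong_end_dtype_arr[0]   # bytes interpreted with the wrong byte order
256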
The obvious fix for this situation is to change the dtype so it gives the correct endianness:
>>> fixed_end_dtype_arr = wrong_end_dtype_arr.newbyteorder()
>>> fixed_end_dtype_arr[0]
1
Data and type endianness dont match, change data to match dtype
You might want to do this if you need the data in memory to be a certain ordering. For example you might be writing
the memory out to a file that needs a certain byte ordering.
>>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap()
>>> fixed_end_mem_arr[0]
1
An easier way of casting the data to a specific dtype and byte ordering can be achieved with the ndarray astype method:
>>> swapped_end_arr = big_end_arr.astype('<i2')
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_str
False
Here we have created a one-dimensional array of length 2. Each element of this array is a structure that contains three
items, a 32-bit integer, a 32-bit float, and a string of length 10 or less. If we index this array at the second position we
get the second structure:
>>> x[1]
(2,3.,"World")
Conveniently, one can access any field of the array by indexing using the string that names that field.
>>> y = x['bar']
>>> y
array([ 2., 3.], dtype=float32)
>>> y[:] = 2*y
>>> y
array([ 4., 6.], dtype=float32)
>>> x
array([(1, 4.0, 'Hello'), (2, 6.0, 'World')],
      dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')])
In these examples, y is a simple float array consisting of the 2nd field in the structured type. But, rather than being
a copy of the data in the structured array, it is a view, i.e., it shares exactly the same memory locations. Thus, when
we updated this array by doubling its values, the structured array shows the corresponding values as doubled as well.
Likewise, if one changes the structured array, the field view also changes:
>>> x[1] = (-1,-1.,"Master")
>>> x
array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')],
      dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')])
>>> y
array([ 4., -1.], dtype=float32)
These different styles can be mixed within the same string (but why would you want to do that?). Furthermore, each
type specifier can be prefixed with a repetition number, or a shape. In these cases an array element is created, i.e., an
array within a record. That array is still referred to as a single field. An example:
>>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64')
>>> x
array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
       ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
       ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])],
      dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))])
Using strings to define the record structure precludes naming the fields in the original definition. The
names can be changed as shown later, however.
2) Tuple argument: The only relevant tuple case that applies to record structures is when a structure is mapped to an
existing data type. This is done by pairing in a tuple, the existing data type with a matching dtype definition (using
any of the variants being described here). As an example (using a definition using a list, so see 3) for further details):
>>> x = np.zeros(3, dtype=('i4', [('r', 'u1'), ('g', 'u1'), ('b', 'u1'), ('a', 'u1')]))
>>> x
array([0, 0, 0])
>>> x['r']
array([0, 0, 0], dtype=uint8)
In this case, an array is produced that looks and acts like a simple int32 array, but also has definitions for fields that
use only one byte of the int32 (a bit like Fortran equivalencing).
3) List argument: In this case the record structure is defined with a list of tuples. Each tuple has 2 or 3 elements
specifying: 1) The name of the field ('' is permitted), 2) the type of the field, and 3) the shape (optional). For example:
>>> x = np.zeros(3, dtype=[('x', 'f4'), ('y', np.float32), ('value', 'f4', (2, 2))])
>>> x
array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
       (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
       (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])],
      dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))])
4) Dictionary argument: two different forms are permitted. The first consists of a dictionary with two required keys
('names' and 'formats'), each having an equal sized list of values. The format list contains any type/shape specifier
allowed in other contexts. The names must be strings. There are two optional keys: 'offsets' and 'titles'. Each must
be a correspondingly matching list to the required two where offsets contain integer offsets for each field, and titles
are objects containing metadata for each field (these do not have to be strings), where the value of None is permitted.
As an example:
>>> x = np.zeros(3, dtype={'names': ['col1', 'col2'], 'formats': ['i4', 'f4']})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
      dtype=[('col1', '>i4'), ('col2', '>f4')])
The other dictionary form permitted is a dictionary of name keys with tuple values specifying type, offset, and an
optional title.
>>> x = np.zeros(3, dtype={'col1': ('i1', 0, 'title 1'), 'col2': ('f4', 1, 'title 2')})
>>> x
The fields are returned in the order they are asked for:
>>> x[['y', 'x']]
array([(2.5, 1.5), (4.0, 3.0), (3.0, 1.0)],
      dtype=[('y', '<f4'), ('x', '<f4')])
If you fill it in row by row, it takes a tuple (but not a list or array!):
>>> arr[0] = (10, 20)
>>> arr
array([(10.0, 20.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)],
      dtype=[('var1', '<f8'), ('var2', '<f8')])
numpy.rec.array can convert a wide variety of arguments into record arrays, including normal structured arrays:
>>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")],
...                dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])
>>> recordarr = np.rec.array(arr)
The numpy.rec module provides a number of other convenience functions for creating record arrays, see record array
creation routines.
A record array representation of a structured array can be obtained using the appropriate view:
>>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")],
...                dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'a10')])
>>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)),
...                      type=np.recarray)
For convenience, viewing an ndarray as type np.recarray will automatically convert to np.record datatype, so the dtype
can be left out of the view:
>>> recordarr = arr.view(np.recarray)
>>> recordarr.dtype
dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]))
To get back to a plain ndarray both the dtype and type must be reset. The following view does so, taking into account
the unusual case that the recordarr was not a structured type:
>>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray)
Record array fields accessed by index or by attribute are returned as a record array if the field has a structured type but
as a plain ndarray otherwise.
>>> recordarr = np.rec.array([('Hello', (1, 2)), ("World", (3, 4))],
...                          dtype=[('foo', 'S6'), ('bar', [('A', int), ('B', int)])])
>>> type(recordarr.foo)
<type 'numpy.ndarray'>
>>> type(recordarr.bar)
<class 'numpy.core.records.recarray'>
Note that if a field has the same name as an ndarray attribute, the ndarray attribute takes precedence. Such fields will
be inaccessible by attribute but may still be accessed by index.
2.8 Subclassing ndarray

2.8.2 Introduction
Subclassing ndarray is relatively simple, but it has some complications compared to other Python objects. On this
page we explain the machinery that allows you to subclass ndarray, and the implications for implementing a subclass.
ndarrays and object creation
Subclassing ndarray is complicated by the fact that new instances of ndarray classes can come about in three different
ways. These are:
1. Explicit constructor call - as in MySubClass(params). This is the usual route to Python instance creation.
2. View casting - casting an existing ndarray as a given subclass
3. New from template - creating a new instance from a template instance. Examples include returning slices from
a subclassed array, creating return types from ufuncs, and copying arrays. See Creating new from template for
more details
The last two are characteristics of ndarrays - in order to support things like array slicing. The complications of
subclassing ndarray are due to the mechanisms numpy has to support these latter two routes of instance creation.
import numpy as np
# create a completely useless ndarray subclass
class C(np.ndarray): pass
# create a standard ndarray
arr = np.zeros((3,))
# take a view of it, as our useless subclass
c_arr = arr.view(C)
>>> type(c_arr)
<class 'C'>
The slice is a view onto the original c_arr data. So, when we take a view from the ndarray, we return a new ndarray,
of the same class, that points to the data in the original.
There are other points in the use of ndarrays where we need such views, such as copying arrays (c_arr.copy()),
creating ufunc output arrays (see also __array_wrap__ for ufuncs), and reducing methods (like c_arr.mean()).
When we call C('hello'), the __new__ method gets its own class as first argument, and the passed argument,
which is the string 'hello'. After python calls __new__, it usually (see below) calls our __init__ method, with
the output of __new__ as the first argument (now a class instance), and the passed arguments following.
As you can see, the object can be initialized in the __new__ method or the __init__ method, or both, and in fact
ndarray does not have an __init__ method, because all the initialization is done in the __new__ method.
Why use __new__ rather than just the usual __init__? Because in some cases, as for ndarray, we want to be able
to return an object of some other class. Consider the following:
class D(C):
    def __new__(cls, *args):
        print 'D cls is:', cls
        print 'D args in __new__:', args
        return C.__new__(C, *args)

    def __init__(self, *args):
        # we never get here
        print 'In D __init__'
meaning that:
>>> obj = D('hello')
D cls is: <class 'D'>
D args in __new__: ('hello',)
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
>>> type(obj)
<class 'C'>
The definition of C is the same as before, but for D, the __new__ method returns an instance of class C rather than D.
Note that the __init__ method of D does not get called. In general, when the __new__ method returns an object
of class other than the class in which it is defined, the __init__ method of that class is not called.
This is how subclasses of the ndarray class are able to return views that preserve the class type. When taking a view,
the standard ndarray machinery creates the new ndarray object with something like:
obj = ndarray.__new__(subtype, shape, ...
where subtype is the subclass. Thus the returned view is of the same class as the subclass, rather than being of
class ndarray.
That solves the problem of returning views of the same type, but now we have a new problem. The machinery of
ndarray can set the class this way, in its standard methods for taking views, but the ndarray __new__ method knows
nothing of what we have done in our own __new__ method in order to set attributes, and so on. (Aside - why not
call obj = subtype.__new__(... then? Because we may not have a __new__ method with the same call
signature).
Now:
>>> # Explicit constructor
>>> c = C((10,))
In __new__ with class <class 'C'>
In array_finalize:
   self type is <class 'C'>
   obj type is <type 'NoneType'>
In __init__ with class <class 'C'>
>>> # View casting
>>> a = np.arange(10)
>>> cast_a = a.view(C)
In array_finalize:
   self type is <class 'C'>
ndarray.__new__ passes __array_finalize__ the new object, of our own class (self) as well as the object
from which the view has been taken (obj). As you can see from the output above, the self is always a newly created
instance of our subclass, and the type of obj differs for the three instance creation methods:
- When called from the explicit constructor, obj is None
- When called from view casting, obj can be an instance of any subclass of ndarray, including our own.
- When called in new-from-template, obj is another instance of our own subclass, that we might use to update the new self instance.
Because __array_finalize__ is the only method that always sees new instances being created, it is the sensible
place to fill in instance defaults for new object attributes, among other tasks.
This may be clearer with an example.
This class isn't very useful, because it has the same constructor as the bare ndarray object, including passing in buffers
and shapes and so on. We would probably prefer the constructor to be able to take an already formed ndarray from the
usual numpy calls to np.array and return an object.
So:
>>> arr = np.arange(5)
>>> obj = RealisticInfoArray(arr, info='information')
>>> type(obj)
<class 'RealisticInfoArray'>
>>> obj.info
'information'
>>> v = obj[1:]
>>> type(v)
<class 'RealisticInfoArray'>
>>> v.info
'information'
Note that the ufunc (np.add) has called the __array_wrap__ method of the input with the highest
__array_priority__ value, in this case MySubClass.__array_wrap__, with arguments self as obj, and out_arr as
the (ndarray) result of the addition. In turn, the default __array_wrap__ (ndarray.__array_wrap__) has cast the
result to class MySubClass, and called __array_finalize__, hence the copying of the info attribute. This has all
happened at the C level.
But, we could do anything we wanted:
class SillySubClass(np.ndarray):

    def __array_wrap__(self, arr, context=None):
        return 'I lost your data'
>>> arr1 = np.arange(5)
>>> obj = arr1.view(SillySubClass)
>>> arr2 = np.arange(5)
>>> ret = np.multiply(obj, arr2)
>>> ret
'I lost your data'
So, by defining a specific __array_wrap__ method for our subclass, we can tweak the output from ufuncs. The
__array_wrap__ method requires self, then an argument - which is the result of the ufunc - and an optional
parameter context. This parameter is returned by some ufuncs as a 3-element tuple: (name of the ufunc, argument
of the ufunc, domain of the ufunc). __array_wrap__ should return an instance of its containing class. See the
masked array subclass for an implementation.
In addition to __array_wrap__, which is called on the way out of the ufunc, there is also an
__array_prepare__ method which is called on the way into the ufunc, after the output arrays are created but
before any computation has been performed. The default implementation does nothing but pass through the array.
__array_prepare__ should not attempt to access the array data or resize the array, it is intended for setting the
output array type, updating attributes and metadata, and performing any checks based on the input that may be desired
before computation begins. Like __array_wrap__, __array_prepare__ must return an ndarray or subclass
thereof or raise an error.
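A minimal sketch of such a hook (the class name and the print statement are illustrative only, not from the original text):

import numpy as np

class LoggedArray(np.ndarray):

    def __array_prepare__(self, out_arr, context=None):
        # context, when supplied, is a tuple (ufunc, ufunc arguments, output index)
        if context is not None:
            print 'about to run', context[0].__name__
        # defer to the default implementation, which simply passes out_arr through
        return np.ndarray.__array_prepare__(self, out_arr, context)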
>>> v1 = arr[1:]
>>> # base now points to the array that it derived from
>>> v1.base is arr
True
>>> # Take a view of a view
>>> v2 = v1[1:]
>>> # base points to the view it derived from
>>> v2.base is v1
True
In general, if the array owns its own memory, as for arr in this case, then arr.base will be None - there are some
exceptions to this - see the numpy book for more details.
The base attribute is useful in being able to tell whether we have a view or the original array. This in turn can
be useful if we need to know whether or not to do some specific cleanup when the subclassed array is deleted. For
example, we may only want to do the cleanup if the original array is deleted, but not the views. For an example of how
this can work, have a look at the memmap class in numpy.core.
CHAPTER THREE

PERFORMANCE
CHAPTER FOUR

MISCELLANEOUS
The following corresponds to the usual functions except that nans are excluded from the results:
nansum()
nanmax()
nanmin()
nanargmax()
nanargmin()
>>> x = np.arange(10.)
>>> x[3] = np.nan
>>> x.sum()
nan
>>> np.nansum(x)
42.0
4.3 Examples
>>> oldsettings = np.seterr(all='warn')
>>> np.zeros(5, dtype=np.float32)/0.
invalid value encountered in divide
>>> j = np.seterr(under='ignore')
>>> np.array([1.e-100])**10
>>> j = np.seterr(invalid='raise')
>>> np.sqrt(np.array([-1.]))
FloatingPointError: invalid value encountered in sqrt
>>> def errorhandler(errstr, errflag):
...     print "saw stupid error!"
>>> np.seterrcall(errorhandler)
<function errorhandler at 0x...>
>>> j = np.seterr(all='call')
>>> np.zeros(5, dtype=np.int32)/0
FloatingPointError: invalid value encountered in divide
saw stupid error!
>>> j = np.seterr(**oldsettings)  # restore previous error-handling settings
4.4 Interfacing to C
Only a survey of the choices. Little detail on how each works.
1. Bare metal, wrap your own C-code manually.
Plusses:
- Efficient
- No dependencies on other tools

Minuses:

- Lots of learning overhead:
  * need to learn basics of Python C API
  * need to learn basics of numpy C API
  * need to learn how to handle reference counting and love it.
- Reference counting often difficult to get right.
  * getting it wrong leads to memory leaks, and worse, segfaults
- API will change for Python 3.0!
2. Cython

Plusses:

- avoid learning C APIs
- no dealing with reference counting
- can code in pseudo python and generate C code
- can also interface to existing C code
- should shield you from changes to Python C api
- has become the de-facto standard within the scientific Python community
- fast indexing support for arrays

Minuses:

- Can write code in non-standard form which may become obsolete
- Not as flexible as manual wrapping
4. ctypes

Plusses:

- part of Python standard library
- good for interfacing to existing sharable libraries, particularly Windows DLLs
- avoids API/reference counting issues
- good numpy support: arrays have all these in their ctypes attribute:
  a.ctypes.data
  a.ctypes.data_as
  a.ctypes.get_as_parameter
  a.ctypes.get_data
  a.ctypes.get_shape
  a.ctypes.get_strides
  a.ctypes.shape
  a.ctypes.shape_as
  a.ctypes.strides
  a.ctypes.strides_as

Minuses:

- can't use for writing code to be turned into C extensions, only a wrapper tool.
5. SWIG (automatic wrapper generator)
Plusses:
CHAPTER FIVE
multiple modules will be defined by that file. However, there are some tricks to get that to work correctly and it is not
covered here.
A minimal init{name} method looks like:
PyMODINIT_FUNC
init{name}(void)
{
   (void)Py_InitModule({name}, mymethods);
   import_array();
}
The mymethods must be an array (usually statically declared) of PyMethodDef structures which contain method
names, actual C-functions, a variable indicating whether the method uses keyword arguments or not, and docstrings.
These are explained in the next section. If you want to add constants to the module, then you store the returned
value from Py_InitModule which is a module object. The most general way to add items to the module is to get the
module dictionary using PyModule_GetDict(module). With the module dictionary, you can add whatever you like to
the module manually. An easier way to add objects to the module is to use one of three additional Python C-API calls
that do not require a separate extraction of the module dictionary. These are documented in the Python documentation,
but repeated here for convenience:
int PyModule_AddObject(PyObject* module, char* name, PyObject* value)
int PyModule_AddIntConstant(PyObject* module, char* name, long value)
int PyModule_AddStringConstant(PyObject* module, char* name, char* value)
All three of these functions require the module object (the return value of Py_InitModule). The name is a string
that labels the value in the module. Depending on which function is called, the value argument is either a
general object (PyModule_AddObject steals a reference to it), an integer constant, or a string constant.
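For example, a module initialization function might add an integer constant right after calling Py_InitModule (a sketch only; the module name and the constant are made up, and mymethods is the method table described in the next section):

PyMODINIT_FUNC
initexample(void)
{
    PyObject *module;

    module = Py_InitModule("example", mymethods);
    import_array();

    /* hypothetical constant, visible from Python as example.MAXDIM */
    PyModule_AddIntConstant(module, "MAXDIM", 32);
}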
Each entry in the mymethods array is a PyMethodDef structure containing 1) the Python name, 2) the C-function
that implements the function, 3) flags indicating whether or not keywords are accepted for this function, and 4) The
docstring for the function. Any number of functions may be defined for a single module by adding more entries to this
table. The last entry must be all NULL as shown to act as a sentinel. Python looks for this entry to know that all of the
functions for the module have been defined.
The last thing that must be done to finish the extension module is to actually write the code that performs the desired
functions. There are two kinds of functions: those that don't accept keyword arguments, and those that do.
The dummy argument is not used in this context and can be safely ignored. The args argument contains all of the
arguments passed in to the function as a tuple. You can do anything you want at this point, but usually the easiest way
to manage the input arguments is to call PyArg_ParseTuple (args, format_string, addresses_to_C_variables...)
or PyArg_UnpackTuple (tuple, name , min, max, ...). A good description of how to use the first function is
contained in the Python C-API reference manual under section 5.5 (Parsing arguments and building values). You
should pay particular attention to the O& format which uses converter functions to go between the Python object and the C object. All of the other format functions can be (mostly) thought of as special cases of this general
rule. There are several converter functions defined in the NumPy C-API that may be of use. In particular, the
PyArray_DescrConverter function is very useful to support arbitrary data-type specification. This function
transforms any valid data-type Python object into a PyArray_Descr * object. Remember to pass in the address of
the C-variables that should be filled in.
There are lots of examples of how to use PyArg_ParseTuple throughout the NumPy source code. The standard
usage is like this:
PyObject *input;
PyArray_Descr *dtype;

if (!PyArg_ParseTuple(args, "OO&", &input,
                      PyArray_DescrConverter,
                      &dtype)) return NULL;
It is important to keep in mind that you get a borrowed reference to the object when using the O format string.
However, the converter functions usually require some form of memory handling. In this example, if the conversion is
successful, dtype will hold a new reference to a PyArray_Descr * object, while input will hold a borrowed reference. Therefore, if this conversion were mixed with another conversion (say to an integer) and the data-type conversion
was successful but the integer conversion failed, then you would need to release the reference count to the data-type
object before returning. A typical way to do this is to set dtype to NULL before calling PyArg_ParseTuple and
then use Py_XDECREF on dtype before returning.
After the input arguments are processed, the code that actually does the work is written (likely calling other functions
as needed). The final step of the C-function is to return something. If an error is encountered then NULL should be
returned (making sure an error has actually been set). If nothing should be returned then increment Py_None and
return it. If a single object should be returned then it is returned (ensuring that you own a reference to it first). If multiple objects should be returned then you need to return a tuple. The Py_BuildValue (format_string, c_variables...)
function makes it easy to build tuples of Python objects from C variables. Pay special attention to the difference between N and O in the format string or you can easily create memory leaks. The O format string increments the
reference count of the PyObject * C-variable it corresponds to, while the N format string steals a reference to the
corresponding PyObject * C-variable. You should use N if you have already created a reference for the object
and just want to give that reference to the tuple. You should use O if you only have a borrowed reference to an object
and need to create one to provide for the tuple.
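As a small sketch of the difference (the helper function and its arguments are illustrative, not from the NumPy sources):

/* Return a tuple (new_array, scale) without leaking a reference. */
static PyObject *
make_pair(double scale)
{
    npy_intp dims[1] = {3};
    PyObject *ret = PyArray_SimpleNew(1, dims, NPY_DOUBLE);

    if (ret == NULL) {
        return NULL;
    }
    /* "N" gives our single reference on ret to the tuple; using "O"
       here would add a second reference that is never released. */
    return Py_BuildValue("Nd", ret, scale);
}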
The kwds argument holds a Python dictionary whose keys are the names of the keyword arguments and whose values
are the corresponding keyword-argument values. This dictionary can be processed however you see fit. The easiest
way to handle it, however, is to replace the PyArg_ParseTuple (args, format_string, addresses...) function with
a call to PyArg_ParseTupleAndKeywords (args, kwds, format_string, char *kwlist[], addresses...). The kwlist
parameter to this function is a NULL -terminated array of strings providing the expected keyword arguments. There
should be one string for each entry in the format_string. Using this function will raise a TypeError if invalid keyword
arguments are passed in.
For more help on this function please see section 1.8 (Keyword Parameters for Extension Functions) of the Extending
and Embedding tutorial in the Python documentation.
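A minimal sketch of a keyword-accepting function (the function name and the single optional axis keyword are illustrative only):

static PyObject *
example_keywords(PyObject *self, PyObject *args, PyObject *kwds)
{
    PyObject *input = NULL;
    int axis = 0;
    static char *kwlist[] = {"input", "axis", NULL};

    if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|i", kwlist,
                                     &input, &axis)) {
        return NULL;
    }
    /* ... do the work using input and axis ... */
    Py_RETURN_NONE;
}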
Reference counting
The biggest difficulty when writing extension modules is reference counting. It is an important reason for the popularity of f2py, weave, Cython, ctypes, etc.... If you mis-handle reference counts you can get problems from memory-leaks
to segmentation faults. The only strategy I know of to handle reference counts correctly is blood, sweat, and tears.
First, you force it into your head that every Python variable has a reference count. Then, you understand exactly
what each function does to the reference count of your objects, so that you can properly use DECREF and INCREF
when you need them. Reference counting can really test the amount of patience and diligence you have towards your
programming craft. Despite the grim depiction, most cases of reference counting are quite straightforward with the
most common difficulty being not using DECREF on objects before exiting early from a routine due to some error. In
second place is the common error of not owning the reference on an object that is passed to a function or macro that is
going to steal the reference ( e.g. PyTuple_SET_ITEM, and most functions that take PyArray_Descr objects).
Typically you get a new reference to a variable when it is created or is the return value of some function (there are
some prominent exceptions, however such as getting an item out of a tuple or a dictionary). When you own the
reference, you are responsible to make sure that Py_DECREF (var) is called when the variable is no longer necessary
(and no other function has stolen its reference). Also, if you are passing a Python object to a function that will steal
the reference, then you need to make sure you own it (or use Py_INCREF to get your own reference). You will also
encounter the notion of borrowing a reference. A function that borrows a reference does not alter the reference count
of the object and does not expect to hold on to the reference. It's just going to use the object temporarily. When you
use PyArg_ParseTuple or PyArg_UnpackTuple you receive a borrowed reference to the objects in the tuple
and should not alter their reference count inside your function. With practice, you can learn to get reference counting
right, but it can be frustrating at first.
One common source of reference-count errors is the Py_BuildValue function. Pay careful attention to the difference between the N format character and the O format character. If you create a new object in your subroutine
(such as an output array), and you are passing it back in a tuple of return values, then you should most likely use
the N format character in Py_BuildValue. The O character will increase the reference count by one. This will
leave the caller with two reference counts for a brand-new array. When the variable is deleted and the reference count
decremented by one, there will still be that extra reference count, and the array will never be deallocated. You will
have a reference-counting induced memory leak. Using the N character will avoid this situation as it will return to
the caller an object (inside the tuple) with a single reference count.
NPY_DOUBLE, NPY_LONGDOUBLE, NPY_CFLOAT, NPY_CDOUBLE,
NPY_CLONGDOUBLE, NPY_OBJECT.
Alternatively, the bit-width names can be used as supported on the platform. For example:
NPY_INT8, NPY_INT16, NPY_INT32, NPY_INT64, NPY_UINT8, NPY_UINT16,
NPY_UINT32, NPY_UINT64, NPY_FLOAT32, NPY_FLOAT64, NPY_COMPLEX64,
NPY_COMPLEX128.
The object will be converted to the desired type only if it can be done without losing precision.
Otherwise NULL will be returned and an error raised. Use NPY_FORCECAST in the requirements
flag to override this behavior.
requirements
The memory model for an ndarray admits arbitrary strides in each dimension to advance to the next
element of the array. Often, however, you need to interface with code that expects a C-contiguous
or a Fortran-contiguous memory layout. In addition, an ndarray can be misaligned (the address of
an element is not at an integral multiple of the size of the element) which can cause your program
to crash (or at least work more slowly) if you try and dereference a pointer into the array data. Both
of these problems can be solved by converting the Python object into an array that is more
well-behaved for your specific usage.

The requirements flag allows specification of what kind of array is acceptable. If the object passed
in does not satisfy these requirements then a copy is made so that the returned object will satisfy
them. This new ndarray can use a very generic pointer to memory. This flag allows specification
of the desired properties of the returned array object. All of the flags are explained in the detailed
API chapter. The flags most commonly needed are NPY_ARRAY_IN_ARRAY, NPY_ARRAY_OUT_ARRAY,
and NPY_ARRAY_INOUT_ARRAY:
NPY_ARRAY_IN_ARRAY
Equivalent to NPY_ARRAY_C_CONTIGUOUS | NPY_ARRAY_ALIGNED. This combination of
flags is useful for arrays that must be in C-contiguous order and aligned. These kinds of arrays
are usually input arrays for some algorithm.
NPY_ARRAY_OUT_ARRAY
Equivalent to NPY_ARRAY_C_CONTIGUOUS | NPY_ARRAY_ALIGNED | NPY_ARRAY_WRITEABLE.
This combination of flags is useful to specify an array that
is in C-contiguous order, is aligned, and can be written to as well. Such an array is usually
returned as output (although normally such output arrays are created from scratch).
NPY_ARRAY_INOUT_ARRAY
Equivalent to NPY_ARRAY_C_CONTIGUOUS | NPY_ARRAY_ALIGNED | NPY_ARRAY_WRITEABLE |
NPY_ARRAY_UPDATEIFCOPY. This combination of flags
is useful to specify an array that will be used for both input and output. If a copy is needed,
then when the temporary is deleted (by your use of Py_DECREF at the end of the interface
routine), the temporary array will be copied back into the original array passed in. Use of
the NPY_ARRAY_UPDATEIFCOPY flag requires that the input object is already an array
(because other objects cannot be automatically updated in this fashion). If an error occurs use
PyArray_DECREF_ERR (obj) on an array with the NPY_ARRAY_UPDATEIFCOPY flag set.
This will delete the array without causing the contents to be copied back into the original array.
Other useful flags that can be OR'd as additional requirements are:
NPY_ARRAY_FORCECAST
Cast to the desired type, even if it can't be done without losing information.
NPY_ARRAY_ENSURECOPY
Make sure the resulting array is a copy of the original.
NPY_ARRAY_ENSUREARRAY
Make sure the resulting object is an actual ndarray and not a subclass.
Note: Whether or not an array is byte-swapped is determined by the data-type of the array. Native byte-order arrays
are always requested by PyArray_FROM_OTF and so there is no need for a NPY_ARRAY_NOTSWAPPED flag in the
requirements argument. There is also no way to get a byte-swapped array from this routine.
and PyArray_ISFORTRAN (obj) respectively. Most third-party libraries expect contiguous arrays. But, often it is
not difficult to support general-purpose striding. I encourage you to use the striding information in your own code
whenever possible, and reserve single-segment requirements for wrapping third-party code. Using the striding information provided with the ndarray rather than requiring a contiguous striding reduces copying that otherwise must be
made.
5.1.5 Example
The following example shows how you might write a wrapper that accepts two input arguments (that will be converted
to an array) and an output argument (that must be an array). The function returns None and updates the output array.
static PyObject *
example_wrapper(PyObject *dummy, PyObject *args)
{
    PyObject *arg1=NULL, *arg2=NULL, *out=NULL;
    PyObject *arr1=NULL, *arr2=NULL, *oarr=NULL;

    if (!PyArg_ParseTuple(args, "OOO!", &arg1, &arg2,
                          &PyArray_Type, &out)) return NULL;

    arr1 = PyArray_FROM_OTF(arg1, NPY_DOUBLE, NPY_IN_ARRAY);
    if (arr1 == NULL) return NULL;
    arr2 = PyArray_FROM_OTF(arg2, NPY_DOUBLE, NPY_IN_ARRAY);
    if (arr2 == NULL) goto fail;
    oarr = PyArray_FROM_OTF(out, NPY_DOUBLE, NPY_INOUT_ARRAY);
    if (oarr == NULL) goto fail;

    /* code that makes use of arguments */
    /* You will probably need at least
       nd = PyArray_NDIM(<..>)    -- number of dimensions
       dims = PyArray_DIMS(<..>)  -- npy_intp array of length nd
                                     showing length in each dim.
       dptr = (double *)PyArray_DATA(<..>) -- pointer to data.

       If an error occurs goto fail.
     */

    Py_DECREF(arr1);
    Py_DECREF(arr2);
    Py_DECREF(oarr);
    Py_INCREF(Py_None);
    return Py_None;

 fail:
    Py_XDECREF(arr1);
    Py_XDECREF(arr2);
    PyArray_XDECREF_ERR(oarr);
    return NULL;
}
There is no conversation more boring than the one where everybody agrees.
    Michel de Montaigne

Duct tape is like the force. It has a light side, and a dark side, and it holds the universe together.
    Carl Zwanzig
Many people like to say that Python is a fantastic glue language. Hopefully, this Chapter will convince you that this is
true. The first adopters of Python for science were typically people who used it to glue together large application codes
running on super-computers. Not only was it much nicer to code in Python than in a shell script or Perl, but the
ability to easily extend Python made it relatively easy to create new classes and types specifically adapted to the
problems being solved. From the interactions of these early contributors, Numeric emerged as an array-like object that
could be used to pass data between these applications.
As Numeric has matured and developed into NumPy, people have been able to write more code directly in NumPy.
Often this code is fast enough for production use, but there are still times when there is a need to access compiled
code, either to get that last bit of efficiency out of the algorithm or to make it easier to access widely-available codes
written in C/C++ or Fortran.
This chapter will review many of the tools that are available for the purpose of accessing code written in other compiled
languages. There are many resources available for learning to call other compiled libraries from Python and the
purpose of this Chapter is not to make you an expert. The main goal is to make you aware of some of the possibilities
so that you will know what to Google in order to learn more.
Once the conversions to the appropriate C-structures and C data-types have been performed, the next step in the
wrapper is to call the underlying function. This is straightforward if the underlying function is in C or C++. However,
in order to call Fortran code you must be familiar with how Fortran subroutines are called from C/C++ using your
compiler and platform. This can vary somewhat across platforms and compilers (which is another reason f2py makes life
much simpler for interfacing Fortran code) but generally involves underscore mangling of the name and the fact that
all variables are passed by reference (i.e. all arguments are pointers).
The advantage of the hand-generated wrapper is that you have complete control over how the C-library gets used and
called which can lead to a lean and tight interface with minimal over-head. The disadvantage is that you have to
write, debug, and maintain C-code, although most of it can be adapted using the time-honored technique of cutting,
pasting, and modifying from other extension modules. Because the procedure of calling out to additional C-code is
fairly regimented, code-generation procedures have been developed to make this process easier. One of these
code-generation techniques is distributed with NumPy and allows easy integration with Fortran and (simple) C code. This
package, f2py, will be covered briefly in the next section.
5.2.3 f2py
F2py allows you to automatically construct an extension module that interfaces to routines in Fortran 77/90/95 code.
It has the ability to parse Fortran 77/90/95 code and automatically generate Python signatures for the subroutines it
encounters, or you can guide how the subroutine interfaces with Python by constructing an interface-definition-file (or
modifying the f2py-produced one).
Creating source for a basic extension module
Probably the easiest way to introduce f2py is to offer a simple example. Here is one of the subroutines contained in a
file named add.f:
C
      SUBROUTINE ZADD(A,B,C,N)
C
      DOUBLE COMPLEX A(*)
      DOUBLE COMPLEX B(*)
      DOUBLE COMPLEX C(*)
      INTEGER N
      DO 20 J = 1, N
         C(J) = A(J) + B(J)
 20   CONTINUE
      END
This routine simply adds the elements in two contiguous arrays and places the result in a third. The memory for
all three arrays must be provided by the calling routine. A very basic interface to this routine can be automatically
generated by f2py:
f2py -m add add.f
You should be able to run this command assuming your search-path is set-up properly. This command will produce an
extension module named addmodule.c in the current directory. This extension module can now be compiled and used
from Python just like any other extension module.
Creating a compiled extension module
You can also get f2py to compile add.f and also compile its produced extension module leaving only a shared-library
extension file that can be imported from Python:
f2py -c -m add add.f
This command leaves a file named add.{ext} in the current directory (where {ext} is the appropriate extension for a
Python extension module on your platform: so, pyd, etc.). This module may then be imported from Python. It
will contain a method for each subroutine in add (zadd, cadd, dadd, sadd). The docstring of each method contains
information about how the module method may be called:
>>> import add
>>> print add.zadd.__doc__
zadd - Function signature:
zadd(a,b,c,n)
Required arguments:
a : input rank-1 array(D) with bounds (*)
b : input rank-1 array(D) with bounds (*)
c : input rank-1 array(D) with bounds (*)
n : input int
Calling this function with a value of n larger than the actual length of the arrays will cause a program crash on most
systems. Under the covers, the lists are being converted to proper arrays but then the underlying add loop is told to
cycle way beyond the borders of the allocated memory.
In order to improve the interface, directives should be provided. This is accomplished by constructing an interface
definition file. It is usually best to start from the interface file that f2py can produce (where it gets its default behavior
from). To get f2py to generate the interface file use the -h option:
f2py -h add.pyf -m add add.f
This command leaves the file add.pyf in the current directory. The section of this file corresponding to zadd is:
subroutine zadd(a,b,c,n) ! in :add:add.f
double complex dimension(*) :: a
double complex dimension(*) :: b
double complex dimension(*) :: c
integer :: n
end subroutine zadd
By placing intent directives and checking code, the interface can be cleaned up quite a bit until the Python module
method is both easier to use and more robust.
subroutine zadd(a,b,c,n) ! in :add:add.f
double complex dimension(n) :: a
double complex dimension(n) :: b
double complex intent(out),dimension(n) :: c
integer intent(hide),depend(a) :: n=len(a)
end subroutine zadd
The intent directive, intent(out) is used to tell f2py that c is an output variable and should be created by the interface
before being passed to the underlying code. The intent(hide) directive tells f2py to not allow the user to specify the
63
variable, n, but instead to get it from the size of a. The depend( a ) directive is necessary to tell f2py that the value of
n depends on the input a (so that it won't try to create the variable n until the variable a is created).
After modifying add.pyf, the new python module file can be generated by compiling both add.f95 and add.pyf:
f2py -c add.pyf add.f95
Alternatively, the same interface can be obtained by placing the f2py directives directly in the Fortran source as
special comments:

C
      SUBROUTINE ZADD(A,B,C,N)
C
CF2PY INTENT(OUT) :: C
CF2PY INTENT(HIDE) :: N
CF2PY DOUBLE COMPLEX :: A(N)
CF2PY DOUBLE COMPLEX :: B(N)
CF2PY DOUBLE COMPLEX :: C(N)
      DOUBLE COMPLEX A(*)
      DOUBLE COMPLEX B(*)
      DOUBLE COMPLEX C(*)
      INTEGER N
      DO 20 J = 1, N
         C(J) = A(J) + B(J)
 20   CONTINUE
      END
The resulting signature for the function add.zadd is exactly the same one that was created previously. If the original
source code had contained A(N) instead of A(*) and so forth with B and C, then I could obtain (nearly) the same
interface simply by placing the INTENT(OUT) :: C comment line in the source code. The only difference is that
N would be an optional input that would default to the length of A.
A filtering example
For comparison with the other methods to be discussed, here is another example of a function that filters a
two-dimensional array of double precision floating-point numbers using a fixed averaging filter. The advantage of using
Fortran to index into multi-dimensional arrays should be clear from this example.
      SUBROUTINE DFILTER2D(A,B,M,N)
C
      DOUBLE PRECISION A(M,N)
      DOUBLE PRECISION B(M,N)
      INTEGER N, M
CF2PY INTENT(OUT) :: B
CF2PY INTENT(HIDE) :: N
CF2PY INTENT(HIDE) :: M
      DO 20 I = 2,M-1
         DO 40 J = 2,N-1
            B(I,J) = A(I,J) +
     $           (A(I-1,J)+A(I+1,J) +
     $            A(I,J-1)+A(I,J+1) )*0.5D0 +
     $           (A(I-1,J-1) + A(I-1,J+1) +
     $            A(I+1,J-1) + A(I+1,J+1))*0.25D0
 40      CONTINUE
 20   CONTINUE
      END
This code can be compiled and linked into an extension module named filter using:
f2py -c -m filter filter.f
This will produce an extension module named filter.so in the current directory with a method named dfilter2d that
returns a filtered version of the input.
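Usage from Python is then straightforward; a sketch (the output values depend on the input and are not shown):

>>> import numpy as np
>>> import filter
>>> a = np.random.rand(10, 10)
>>> b = filter.dfilter2d(a)      # same shape as a, averaged in the interior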
Calling f2py from Python
The f2py program is written in Python and can be run from inside your code to compile Fortran code at runtime, as
follows:
from numpy import f2py
with open("add.f") as sourcefile:
    sourcecode = sourcefile.read()
f2py.compile(sourcecode, modulename='add')
import add
The source string can be any valid Fortran code. If you want to save the extension-module source code then a suitable
file-name can be provided by the source_fn keyword to the compile function.
Automatic extension module generation
If you want to distribute your f2py extension module, then you only need to include the .pyf file and the Fortran code.
The distutils extensions in NumPy allow you to define an extension module entirely in terms of this interface file. A
valid setup.py file allowing distribution of the add.f module (as part of the package f2py_examples so that it
would be loaded as f2py_examples.add) is:
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('f2py_examples', parent_package, top_path)
    config.add_extension('add', sources=['add.pyf', 'add.f'])
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(**configuration(top_path='').todict())
Installation of the new package is easy using python setup.py install, assuming you have the proper permissions to write to the main site-packages directory for the version of Python you
are using. For the resulting package to work, you need to create a file named __init__.py (in the same directory
as add.pyf). Notice the extension module is defined entirely in terms of the add.pyf and add.f files. The
conversion of the .pyf file to a .c file is handled by numpy.distutils.
Conclusion
The interface definition file (.pyf) is how you can fine-tune the interface between Python and Fortran. There is decent
documentation for f2py found in the numpy/f2py/docs directory wherever NumPy is installed on your system (usually under site-packages). There is also more information on using f2py (including how to use it to wrap C codes) at
http://www.scipy.org/Cookbook under the Using NumPy with Other Languages heading.
The f2py method of linking compiled code is currently the most sophisticated and integrated approach. It allows clean
separation of Python from compiled code while still allowing for separate distribution of the extension module. The
only draw-back is that it requires the existence of a Fortran compiler in order for a user to install the code. However,
with the existence of the free compilers g77, gfortran, and g95, as well as high-quality commercial compilers, this
restriction is not particularly onerous. In my opinion, Fortran is still the easiest way to write fast and clear code for
scientific computing. It handles complex numbers, and multi-dimensional indexing in the most straightforward way.
Be aware, however, that some Fortran compilers will not be able to optimize code as well as good hand-written
C-code.
5.2.4 Cython
Cython is a compiler for a Python dialect that adds (optional) static typing for speed, and allows mixing C or C++
code into your modules. It produces C or C++ extensions that can be compiled and imported in Python code.
If you are writing an extension module that will include quite a bit of your own algorithmic code as well, then Cython
is a good match. Among its features is the ability to easily and quickly work with multidimensional arrays.
Notice that Cython is an extension-module generator only. Unlike f2py, it includes no automatic facility for compiling
and linking the extension module (which must be done in the usual fashion). It does provide a modified distutils
class called build_ext which lets you build an extension module from a .pyx source. Thus, you could write in a
setup.py file:
from Cython.Distutils import build_ext
from distutils.extension import Extension
from distutils.core import setup
import numpy
setup(name='mine', description='Nothing',
      ext_modules=[Extension('filter', ['filter.pyx'],
                             include_dirs=[numpy.get_include()])],
      cmdclass={'build_ext': build_ext})
Adding the NumPy include directory is, of course, only necessary if you are using NumPy arrays in the extension
module (which is what we assume you are using Cython for). The distutils extensions in NumPy also include support
66
for automatically producing the extension-module and linking it from a .pyx file. It works so that if the user does not
have Cython installed, then it looks for a file with the same file-name but a .c extension which it then uses instead of
trying to produce the .c file again.
If you just use Cython to compile a standard Python module, then you will get a C extension module that typically
runs a bit faster than the equivalent Python module. Further speed increases can be gained by using the cdef keyword
to statically define C variables.
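For instance, a small illustrative sketch (not one of the examples below) in which the loop variables are declared with cdef:

# csum.pyx -- hypothetical example of cdef-typed variables
def csum(data):
    cdef double total = 0.0
    cdef int i, n = len(data)
    for i in range(n):
        total += data[i]
    return total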
Let's look at two examples we've seen before to see how they might be implemented using Cython. These examples
were compiled into extension modules using Cython 0.21.1.
Complex addition in Cython
Here is part of a Cython module named add.pyx which implements the complex addition functions we previously
implemented using f2py:
cimport cython
cimport numpy as np
import numpy as np
# We need to initialize NumPy.
np.import_array()
#@cython.boundscheck(False)
def zadd(in1, in2):
    cdef double complex[:] a = in1.ravel()
    cdef double complex[:] b = in2.ravel()

    out = np.empty(a.shape[0], np.complex128)
    cdef double complex[:] c = out.ravel()

    for i in range(c.shape[0]):
        c[i].real = a[i].real + b[i].real
        c[i].imag = a[i].imag + b[i].imag

    return out
This module shows use of the cimport statement to load the definitions from the numpy.pxd header that ships
with Cython. It looks like NumPy is imported twice; cimport only makes the NumPy C-API available, while the
regular import causes a Python-style import at runtime and makes it possible to call into the familiar NumPy Python
API.
The example also demonstrates Cython's typed memoryviews, which are like NumPy arrays at the C level, in the
sense that they are shaped and strided arrays that know their own extent (unlike a C array addressed through a bare
pointer). The syntax double complex[:] denotes a one-dimensional array (vector) of doubles, with arbitrary
strides. A contiguous array of ints would be int[::1], while a matrix of floats would be float[:, :].
Shown commented is the cython.boundscheck decorator, which turns bounds-checking for memory view accesses on or off on a per-function basis. We can use this to further speed up our code, at the expense of safety (or a
manual check prior to entering the loop).
Other than the view syntax, the function is immediately readable to a Python programmer. Static typing of the variable
i is implicit. Instead of the view syntax, we could also have used Cython's special NumPy array syntax, but the view
syntax is preferred.
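A minimal sketch of such an image.pyx module, consistent with the discussion that follows (the exact source may differ in detail), is:

cimport cython
cimport numpy as np
import numpy as np

np.import_array()

@cython.boundscheck(False)
def filter(img):
    cdef double[:, :] a = np.asarray(img, dtype=np.double)
    out = np.zeros((a.shape[0], a.shape[1]), dtype=np.double)
    cdef double[:, ::1] b = out

    cdef np.intp_t i, j

    for i in range(1, a.shape[0] - 1):
        for j in range(1, a.shape[1] - 1):
            b[i, j] = (a[i, j]
                       + .5 * (a[i-1, j] + a[i+1, j]
                               + a[i, j-1] + a[i, j+1])
                       + .25 * (a[i-1, j-1] + a[i-1, j+1]
                                + a[i+1, j-1] + a[i+1, j+1]))
    return out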
This 2-d averaging filter runs quickly because the loop is in C and the pointer computations are done only as needed.
If the code above is compiled as a module image, then a 2-d image, img, can be filtered using this code very quickly
using:
import image
out = image.filter(img)
Regarding the code, two things are of note: firstly, it is impossible to return a memory view to Python. Instead, a
NumPy array out is first created, and then a view b onto this array is used for the computation. Secondly, the view b
is typed double[:, ::1]. This means 2-d array with contiguous rows, i.e., C matrix order. Specifying the order
explicitly can speed up some algorithms since they can skip stride computations.
Conclusion
Cython is the extension mechanism of choice for several scientific Python libraries, including Scipy, Pandas, SAGE,
scikit-image and scikit-learn, as well as the XML processing library LXML. The language and compiler are well-maintained.
There are several disadvantages of using Cython:
1. When coding custom algorithms, and sometimes when wrapping existing C libraries, some familiarity with C
is required. In particular, when using C memory management (malloc and friends), it's easy to introduce
memory leaks. However, just compiling a Python module renamed to .pyx can already speed it up, and adding
a few type declarations can give dramatic speedups in some code.
2. It is easy to lose a clean separation between Python and C which makes re-using your C-code for other non-Python-related projects more difficult.
3. The C-code generated by Cython is hard to read and modify (and typically compiles with annoying but harmless
warnings).
One big advantage of Cython-generated extension modules is that they are easy to distribute. In summary, Cython is a
very capable tool for either gluing C code or generating an extension module quickly and should not be over-looked.
It is especially useful for people that can't or won't write C or Fortran code.
5.2.5 ctypes
Ctypes is a Python extension module, included in the stdlib, that allows you to call an arbitrary function in a shared
library directly from Python. This approach allows you to interface with C-code directly from Python. This opens
up an enormous number of libraries for use from Python. The drawback, however, is that coding mistakes can lead
to ugly program crashes very easily (just as can happen in C) because there is little type or bounds checking done
on the parameters. This is especially true when array data is passed in as a pointer to a raw memory location. The
It is then your responsibility to make sure that the subroutine does not access memory outside the actual array area. But, if you don't
mind living a little dangerously, ctypes can be an effective tool for quickly taking advantage of a large shared library
(or writing extended functionality in your own shared library).
Because the ctypes approach exposes a raw interface to the compiled code it is not always tolerant of user mistakes.
Robust use of the ctypes module typically involves an additional layer of Python code in order to check the data types
and array bounds of objects passed to the underlying subroutine. This additional layer of checking (not to mention
the conversion from ctypes objects to C-data-types that ctypes itself performs), will make the interface slower than a
hand-written extension-module interface. However, this overhead should be negligible if the C-routine being called is
doing any significant amount of work. If you are a great Python programmer with weak C skills, ctypes is an easy way
to write a useful interface to a (shared) library of compiled code.
To use ctypes you must
1. Have a shared library.
2. Load the shared library.
3. Convert the python objects to ctypes-understood arguments.
4. Call the function from the library with the ctypes arguments.
Having a shared library
There are several requirements for a shared library that can be used with ctypes that are platform specific. This guide
assumes you have some familiarity with making a shared library on your system (or simply have a shared library
available to you). Items to remember are:
A shared library must be compiled in a special way ( e.g. using the -shared flag with gcc).
On some platforms (e.g. Windows) , a shared library requires a .def file that specifies the functions to be
exported. For example a mylib.def file might contain:
LIBRARY mylib.dll
EXPORTS
cool_function1
cool_function2
Alternatively, you may be able to use the storage-class specifier __declspec(dllexport) in the C-definition of the function to avoid the need for this .def file.
There is no standard way in Python distutils to create a standard shared library (an extension module is a special
shared library Python understands) in a cross-platform manner. Thus, a big disadvantage of ctypes at the time of
writing this book is that it is difficult to distribute in a cross-platform manner a Python extension that uses ctypes and
includes your own code which should be compiled as a shared library on the users system.
However, on Windows accessing an attribute of the cdll method will load the first DLL by that name found in the
current directory or on the PATH. Loading the absolute path name requires a little finesse for cross-platform work
since the extension of shared libraries varies. There is a ctypes.util.find_library utility available that can
simplify the process of finding the library to load but it is not foolproof. Complicating matters, different platforms
have different default extensions used by shared libraries (e.g. .dll Windows, .so Linux, .dylib Mac OS X). This
must also be taken into account if you are using ctypes to wrap code that needs to work on several platforms.
NumPy provides a convenience function called ctypeslib.load_library (name, path). This function takes the
name of the shared library (including any prefix like lib but excluding the extension) and a path where the shared
library can be located. It returns a ctypes library object or raises an OSError if the library cannot be found or
raises an ImportError if the ctypes module is not available. (Windows users: the ctypes library object loaded
using load_library is always loaded assuming cdecl calling convention. See the ctypes documentation under
ctypes.windll and/or ctypes.oledll for ways to load libraries under other calling conventions).
The functions in the shared library are available as attributes of the ctypes library object (returned from
ctypeslib.load_library) or as items using lib[func_name] syntax. The latter method for retrieving a function name is particularly useful if the function name contains characters that are not allowable in Python
variable names.
Converting arguments
Python ints/longs, strings, and unicode objects are automatically converted as needed to equivalent ctypes arguments.
The None object is also converted automatically to a NULL pointer. All other Python objects must be converted to
ctypes-specific types. There are two ways around this restriction that allow ctypes to integrate with other objects.
1. Don't set the argtypes attribute of the function object and define an _as_parameter_ method for the object
you want to pass in. The _as_parameter_ method must return a Python int which will be passed directly to
the function.
2. Set the argtypes attribute to a list whose entries contain objects with a classmethod named from_param that
knows how to convert your object to an object that ctypes can understand (an int/long, string, unicode, or object
with the _as_parameter_ attribute).
NumPy uses both methods with a preference for the second method because it can be safer. The ctypes attribute of the
ndarray returns an object that has an _as_parameter_ attribute which returns an integer representing the address
of the ndarray to which it is associated. As a result, one can pass this ctypes attribute object directly to a function
expecting a pointer to the data in your ndarray. The caller must be sure that the ndarray object is of the correct type,
shape, and has the correct flags set, or risk nasty crashes if the data-pointer of an inappropriate array is passed in.
To implement the second method, NumPy provides the class-factory function ndpointer in the ctypeslib module. This class-factory function produces an appropriate class that can be placed in an argtypes attribute entry of a
ctypes function. The class will contain a from_param method which ctypes will use to convert any ndarray passed in
to the function to a ctypes-recognized object. In the process, the conversion will perform checking on any properties of
the ndarray that were specified by the user in the call to ndpointer. Aspects of the ndarray that can be checked include the data-type, the number-of-dimensions, the shape, and/or the state of the flags on any array passed. The return
value of the from_param method is the ctypes attribute of the array which (because it contains the _as_parameter_
attribute pointing to the array data area) can be used by ctypes directly.
The ctypes attribute of an ndarray is also endowed with additional attributes that may be convenient when passing
additional information about the array into a ctypes function. The attributes data, shape, and strides can provide
ctypes compatible types corresponding to the data-area, the shape, and the strides of the array. The data attribute
returns a c_void_p representing a pointer to the data area. The shape and strides attributes each return an array of
ctypes integers (or None representing a NULL pointer, if a 0-d array). The base ctype of the array is a ctype integer
of the same size as a pointer on the platform. There are also methods data_as({ctype}), shape_as(<base
ctype>), and strides_as(<base ctype>). These return the data as a ctype object of your choice and the
shape/strides arrays using an underlying base type of your choice. For convenience, the ctypeslib module also
contains c_intp as a ctypes integer data-type whose size is the same as the size of c_void_p on the platform (its
value is None if ctypes is not installed).
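For example (a small sketch; the pointer value itself is platform dependent and not shown):

>>> import ctypes
>>> import numpy as np
>>> a = np.zeros((2, 3))
>>> ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
>>> a.ctypes.shape_as(np.ctypeslib.c_intp)[0]
2
>>> a.ctypes.strides_as(np.ctypeslib.c_intp)[1]
8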
Calling the function
The function is accessed as an attribute of or an item from the loaded shared-library. Thus, if ./mylib.so has a
function named cool_function1, I could access this function either as:

lib = numpy.ctypeslib.load_library('mylib', '.')
func1 = lib.cool_function1   # or equivalently
func1 = lib['cool_function1']
In ctypes, the return-value of a function is set to be int by default. This behavior can be changed by setting the
restype attribute of the function. Use None for the restype if the function has no return value (void):
func1.restype = None
As previously discussed, you can also set the argtypes attribute of the function in order to have ctypes check the types
of the input arguments when the function is called. Use the ndpointer factory function to generate a ready-made
class for data-type, shape, and flags checking on your new function. The ndpointer function has the signature
ndpointer(dtype=None, ndim=None, shape=None, flags=None)
Keyword arguments with the value None are not checked. Specifying a keyword enforces checking of that
aspect of the ndarray on conversion to a ctypes-compatible object. The dtype keyword can be any object
understood as a data-type object. The ndim keyword should be an integer, and the shape keyword should be
an integer or a sequence of integers. The flags keyword specifies the minimal flags that are required on any
array passed in. This can be specified as a string of comma separated requirements, an integer indicating the
requirement bits ORd together, or a flags object returned from the flags attribute of an array with the necessary
requirements.
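As a rough illustration, reusing the hypothetical mylib / cool_function1 names from above and assuming that function takes a 1-d double array followed by its length:

import numpy as np
from numpy.ctypeslib import load_library, ndpointer

lib = load_library('mylib', '.')
lib.cool_function1.restype = None
lib.cool_function1.argtypes = [ndpointer(dtype=np.double, ndim=1,
                                         flags='aligned, contiguous'),
                               np.ctypeslib.c_intp]

a = np.require(np.arange(10.), np.double, ['ALIGNED', 'CONTIGUOUS'])
lib.cool_function1(a, a.size)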
Using an ndpointer class in the argtypes attribute can make it significantly safer to call a C function using ctypes and
the data-area of an ndarray. You may still want to wrap the function in an additional Python wrapper to make it
user-friendly (hiding some obvious arguments and making some arguments output arguments). In this process, the
requires function in NumPy may be useful to return the right kind of array from a given input.
Complete example
In this example, I will show how the addition function and the filter function implemented previously using the other
approaches can be implemented using ctypes. First, the C code which implements the algorithms contains the functions
zadd, dadd, sadd, cadd, and dfilter2d. The zadd function is:
/* Add arrays of contiguous data */
typedef struct {double real; double imag;} cdouble;
typedef struct {float real; float imag;} cfloat;

void zadd(cdouble *a, cdouble *b, cdouble *c, long n)
{
    while (n--) {
        c->real = a->real + b->real;
        c->imag = a->imag + b->imag;
        a++; b++; c++;
    }
}
with similar code for cadd, dadd, and sadd that handle complex float, double, and float data-types, respectively:
void cadd(cfloat *a, cfloat *b, cfloat *c, long n)
{
    while (n--) {
        c->real = a->real + b->real;
        c->imag = a->imag + b->imag;
        a++; b++; c++;
    }
}

void dadd(double *a, double *b, double *c, long n)
{
    while (n--) {
        *c++ = *a++ + *b++;
    }
}

void sadd(float *a, float *b, float *c, long n)
{
    while (n--) {
        *c++ = *a++ + *b++;
    }
}
A possible advantage this code has over the Fortran-equivalent code is that it takes arbitrarily strided (i.e.
non-contiguous) arrays and may also run faster depending on the optimization capability of your compiler. But, it is
obviously more complicated than the simple code in filter.f. This code must be compiled into a shared library.
On my Linux system this is accomplished using:
gcc -o code.so -shared code.c
which creates a shared library named code.so in the current directory. On Windows don't forget to either add
__declspec(dllexport) in front of void on the line preceding each function definition, or write a code.def
file that lists the names of the functions to be exported.
A suitable Python interface to this shared library should be constructed. To do this create a file named interface.py
with the following lines at the top:
__all__ = ['add', 'filter2d']

import numpy as N
import os

_path = os.path.dirname(__file__)
lib = N.ctypeslib.load_library('code', _path)
_typedict = {'zadd': complex, 'sadd': N.single,
             'cadd': N.csingle, 'dadd': float}
for name in _typedict.keys():
    val = getattr(lib, name)
    val.restype = None
    _type = _typedict[name]
    val.argtypes = [N.ctypeslib.ndpointer(_type,
                        flags='aligned, contiguous'),
                    N.ctypeslib.ndpointer(_type,
                        flags='aligned, contiguous'),
                    N.ctypeslib.ndpointer(_type,
                        flags='aligned, contiguous,'
                              'writeable'),
                    N.ctypeslib.c_intp]
This code loads the shared library named code.{ext} located in the same path as this file. It then adds a return
type of void to the functions contained in the library. It also adds argument checking to the functions in the library so
that ndarrays can be passed as the first three arguments along with an integer (large enough to hold a pointer on the
platform) as the fourth argument.
Setting up the filtering function is similar and allows the filtering function to be called with ndarray arguments as the
first two arguments and with pointers to integers (large enough to handle the strides and shape of an ndarray) as the
last two arguments:
import ctypes
lib.dfilter2d.restype = None
lib.dfilter2d.argtypes = [N.ctypeslib.ndpointer(float, ndim=2,
                              flags='aligned'),
                          N.ctypeslib.ndpointer(float, ndim=2,
                              flags='aligned, contiguous,'
                                    'writeable'),
                          ctypes.POINTER(N.ctypeslib.c_intp),
                          ctypes.POINTER(N.ctypeslib.c_intp)]
Next, define a simple selection function that chooses which addition function to call in the shared library based on the
data-type:
def select(dtype):
    if dtype.char in ['?bBhHf']:
        return lib.sadd, N.single
    elif dtype.char in ['F']:
        return lib.cadd, N.csingle
    elif dtype.char in ['DG']:
        return lib.zadd, complex
    else:
        return lib.dadd, float
Finally, the two functions to be exported by the interface can be written simply as:
def add(a, b):
    requires = ['CONTIGUOUS', 'ALIGNED']
    a = N.asanyarray(a)
    func, dtype = select(a.dtype)
    a = N.require(a, dtype, requires)
    b = N.require(b, dtype, requires)
    c = N.empty_like(a)
    func(a, b, c, a.size)
    return c
and:
def filter2d(a):
    a = N.require(a, float, ['ALIGNED'])
    b = N.zeros_like(a)
    lib.dfilter2d(a, b, a.ctypes.strides, a.ctypes.shape)
    return b
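With the interface module in place, a quick check of the two exported functions might look like this sketch (it assumes code.so has been built and interface.py is importable):

>>> import numpy as np
>>> from interface import add, filter2d
>>> add(np.array([1., 2., 3.]), np.array([4., 5., 6.]))
array([ 5.,  7.,  9.])
>>> filter2d(np.ones((4, 4))).shape
(4, 4)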
Conclusion
Using ctypes is a powerful way to connect Python with arbitrary C-code. Its advantages for extending Python include:

    - clean separation of C code from Python code
        * no need to learn a new syntax except Python and C
        * allows re-use of C code
        * functionality in shared libraries written for other purposes can be obtained with a simple
          Python wrapper and search for the library
    - easy integration with NumPy through the ctypes attribute
    - full argument checking with the ndpointer class factory
Its disadvantages include:

    - It is difficult to distribute an extension module made using ctypes because of a lack of support
      for building shared libraries in distutils (but I suspect this will change in time).
    - You must have shared-libraries of your code (no static libraries).
    - Very little support for C++ code and its different library-calling conventions. You will probably
      need a C wrapper around C++ code to use with ctypes (or just use Boost.Python instead).
Because of the difficulty in distributing an extension module made using ctypes, f2py and Cython are still the easiest
ways to extend Python for package creation. However, ctypes is in some cases a useful alternative, and improvements to
packaging and build-tool support should eventually eliminate the difficulty in extending Python and distributing the
extension using ctypes.
don't know much about them (SIP, Boost). I have not added links to these methods because my experience is that you
can find the most relevant link faster using Google or some other search engine, and any links provided here would
be quickly dated. Do not assume that just because it is included in this list, I don't think the package deserves your
attention. I'm including information about these packages because many people have found them useful and I'd like
to give you as many options as possible for tackling the problem of easily integrating your code.
SWIG
Simplified Wrapper and Interface Generator (SWIG) is an old and fairly stable method for wrapping C/C++-libraries
to a large variety of other languages. It does not specifically understand NumPy arrays but can be made useable
with NumPy through the use of typemaps. There are some sample typemaps in the numpy/tools/swig directory under
numpy.i together with an example module that makes use of them. SWIG excels at wrapping large C/C++ libraries
because it can (almost) parse their headers and auto-produce an interface. Technically, you need to generate a .i file
that defines the interface. Often, however, this .i file can be part of the header itself. The interface usually needs a bit
of tweaking to be very useful. This ability to parse C/C++ headers and auto-generate the interface still makes SWIG
a useful approach to adding functionality from C/C++ into Python, despite the other methods that have emerged that
are more targeted to Python. SWIG can actually target extensions for several languages, but the typemaps usually
have to be language-specific. Nonetheless, with modifications to the Python-specific typemaps, SWIG can be used to
interface a library with other languages such as Perl, Tcl, and Ruby.
My experience with SWIG has been generally positive in that it is relatively easy to use and quite powerful. I used
to use it quite often before becoming more proficient at writing C-extensions. However, I struggled writing custom
interfaces with SWIG because it must be done using the concept of typemaps which are not Python specific and are
written in a C-like syntax. Therefore, I tend to prefer other gluing strategies and would only attempt to use SWIG to
wrap a very-large C/C++ library. Nonetheless, there are others who use SWIG quite happily.
SIP
SIP is another tool for wrapping C/C++ libraries that is Python specific and appears to have very good support for
C++. Riverbank Computing developed SIP in order to create Python bindings to the QT library. An interface file must
be written to generate the binding, but the interface file looks a lot like a C/C++ header file. While SIP is not a full
C++ parser, it understands quite a bit of C++ syntax as well as its own special directives that allow modification of
how the Python binding is accomplished. It also allows the user to define mappings between Python types and C/C++
structures and classes.
Boost Python
Boost is a repository of C++ libraries and Boost.Python is one of those libraries which provides a concise interface
for binding C++ classes and functions to Python. The amazing part of the Boost.Python approach is that it works
entirely in pure C++ without introducing a new syntax. Many users of C++ report that Boost.Python makes it possible
to combine the best of both worlds in a seamless fashion. I have not used Boost.Python because I am not a big user of
C++ and using Boost to wrap simple C-subroutines is usually over-kill. Its primary purpose is to make C++ classes
available in Python. So, if you have a set of C++ classes that need to be integrated cleanly into Python, consider
learning about and using Boost.Python.
PyFort
PyFort is a nice tool for wrapping Fortran and Fortran-like C-code into Python with support for Numeric arrays. It
was written by Paul Dubois, a distinguished computer scientist and the very first maintainer of Numeric (now retired).
It is worth mentioning in the hopes that somebody will update PyFort to work with NumPy arrays as well which now
support either Fortran or C-style contiguous arrays.
This is wonderful because the function writer doesn't have to manually propagate infs or nans.
/*
* This tells Python what methods this module has.
* See the Python-C API for more information.
*/
static PyMethodDef SpamMethods[] = {
    {"logit", spam_logit, METH_VARARGS, "compute logit"},
    {NULL, NULL, 0, NULL}
};
/*
* This actually defines the logit function for
* input args from Python.
*/
static PyObject* spam_logit(PyObject *self, PyObject *args)
{
    double p;

    /* This parses the Python argument into a double */
    if(!PyArg_ParseTuple(args, "d", &p)) {
        return NULL;
    }

    /* THE ACTUAL LOGIT FUNCTION */
    p = p/(1-p);
    p = log(p);

    /* This builds the answer back into a python object */
    return Py_BuildValue("d", p);
}
#if defined(NPY_PY3K)
static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "spam",
    NULL,
    -1,
    SpamMethods,
    NULL,
    NULL,
    NULL,
    NULL
};
PyMODINIT_FUNC PyInit_spam(void)
{
    PyObject *m;
    m = PyModule_Create(&moduledef);
    if (!m) {
        return NULL;
    }
    return m;
}
#else
PyMODINIT_FUNC initspam(void)
{
    PyObject *m;
    m = Py_InitModule("spam", SpamMethods);
    if (m == NULL) {
        return;
    }
}
#endif
To use the setup.py file, place setup.py and spammodule.c in the same folder. Then python setup.py build will build
the module to import, or setup.py install will install the module to your site-packages directory.
from distutils.core import setup, Extension

module1 = Extension('spam', sources=['spammodule.c'])

setup(name='spam',
      version='1.0',
      description='This is my spam package',
      ext_modules=[module1])
Once the spam module is imported into Python, you can call logit via spam.logit. Note that the function used above
cannot be applied as-is to numpy arrays. To do so we must call numpy.vectorize on it. For example, if a python
interpreter is opened in the file containing the spam library or spam has been installed, one can perform the following
commands:
>>> import numpy as np
>>> import spam
>>> spam.logit(0)
-inf
>>> spam.logit(1)
inf
>>> spam.logit(0.5)
0.0
>>> x = np.linspace(0,1,10)
>>> spam.logit(x)
TypeError: only length-1 arrays can be converted to Python scalars
>>> f = np.vectorize(spam.logit)
>>> f(x)
array([       -inf, -2.07944154, -1.25276297, -0.69314718, -0.22314355,
        0.22314355,  0.69314718,  1.25276297,  2.07944154,         inf])
THE RESULTING LOGIT FUNCTION IS NOT FAST! numpy.vectorize simply loops over spam.logit. The loop is
done at the C level, but the numpy array is constantly being parsed and built back up. This is expensive. When the
author compared numpy.vectorize(spam.logit) against the logit ufuncs constructed below, the logit ufuncs were almost
exactly 4 times faster. Larger or smaller speedups are, of course, possible depending on the nature of the function.
"Python.h"
"math.h"
"numpy/ndarraytypes.h"
"numpy/ufuncobject.h"
"numpy/npy_3kcompat.h"
single_type_logit.c
This is the C code for creating your own
Numpy ufunc for a logit function.
In this code we only define the ufunc for
a single dtype. The computations that must
be replaced to create a ufunc for
a different funciton are marked with BEGIN
and END.
    if (!m) {
        return NULL;
    }

    import_array();
    import_umath();

    logit = PyUFunc_FromFuncAndData(funcs, data, types, 1, 1, 1,
                                    PyUFunc_None, "logit",
                                    "logit_docstring", 0);

    d = PyModule_GetDict(m);

    PyDict_SetItemString(d, "logit", logit);
    Py_DECREF(logit);

    return m;
}
#else
PyMODINIT_FUNC initnpufunc(void)
{
    PyObject *m, *logit, *d;

    m = Py_InitModule("npufunc", LogitMethods);
    if (m == NULL) {
        return;
    }

    import_array();
    import_umath();

    logit = PyUFunc_FromFuncAndData(funcs, data, types, 1, 1, 1,
                                    PyUFunc_None, "logit",
                                    "logit_docstring", 0);

    d = PyModule_GetDict(m);

    PyDict_SetItemString(d, "logit", logit);
    Py_DECREF(logit);
}
#endif
This is a setup.py file for the above code. As before, the module can be built by calling python setup.py build at the
command prompt, or installed to site-packages via python setup.py install.
After the above has been installed, it can be imported and used as follows.
>>> import numpy as np
>>> import npufunc
>>> npufunc.logit(0.5)
0.0
>>> a = np.linspace(0,1,5)
>>> npufunc.logit(a)
array([       -inf, -1.09861229,  0.        ,  1.09861229,         inf])
"Python.h"
"math.h"
"numpy/ndarraytypes.h"
"numpy/ufuncobject.h"
"numpy/halffloat.h"
/*
* multi_type_logit.c
    m = Py_InitModule("npufunc", LogitMethods);
    if (m == NULL) {
        return;
    }
    import_array();
    import_umath();

    logit = PyUFunc_FromFuncAndData(funcs, data, types, 4, 1, 1,
                                    PyUFunc_None, "logit",
                                    "logit_docstring", 0);

    d = PyModule_GetDict(m);

    PyDict_SetItemString(d, "logit", logit);
    Py_DECREF(logit);
}
#endif
This is a setup.py file for the above code. As before, the module can be built by calling python setup.py build at the
command prompt, or installed to site-packages via python setup.py install.
    return config

if __name__ == "__main__":
    from numpy.distutils.core import setup
    setup(configuration=configuration)
After the above has been installed, it can be imported and used as follows.
>>> import numpy as np
>>> import npufunc
>>> npufunc.logit(0.5)
0.0
>>> a = np.linspace(0,1,5)
>>> npufunc.logit(a)
array([       -inf, -1.09861229,  0.        ,  1.09861229,         inf])
is replaced with
config.add_extension('npufunc', ['multi_arg_logit.c'])
The C file is given below. The ufunc generated takes two arguments A and B. It returns a tuple whose first element
is A*B and whose second element is logit(A*B). Note that it automatically supports broadcasting, as well as all other
properties of a ufunc.
#include "Python.h"
#include "math.h"
#include "numpy/ndarraytypes.h"
#include "numpy/ufuncobject.h"
#include "numpy/halffloat.h"
/*
* multi_arg_logit.c
* This is the C code for creating your own
* Numpy ufunc for a multiple argument, multiple
* return value ufunc. The places where the
* ufunc computation is carried out are marked
* with comments.
*
* Details explaining the Python-C API can be found under
* Extending and Embedding and Python/C API at
* docs.python.org .
*
*/
static PyMethodDef LogitMethods[] = {
    {NULL, NULL, 0, NULL}
};
/* The loop definition must precede the PyMODINIT_FUNC. */

static void double_logitprod(char **args, npy_intp *dimensions,
                             npy_intp *steps, void *data)
{
    npy_intp i;
    npy_intp n = dimensions[0];
    char *in1 = args[0], *in2 = args[1];
    char *out1 = args[2], *out2 = args[3];
    npy_intp in1_step = steps[0], in2_step = steps[1];
    npy_intp out1_step = steps[2], out2_step = steps[3];
    double tmp;

    for (i = 0; i < n; i++) {
        /* BEGIN main ufunc computation */
        tmp = *(double *)in1;
        tmp *= *(double *)in2;
        *((double *)out1) = tmp;
        *((double *)out2) = log(tmp / (1 - tmp));
        /* END main ufunc computation */
        in1 += in1_step;
        in2 += in2_step;
        out1 += out1_step;
        out2 += out2_step;
    }
}
    m = Py_InitModule("npufunc", LogitMethods);
    if (m == NULL) {
        return;
    }

    import_array();
    import_umath();

    logit = PyUFunc_FromFuncAndData(funcs, data, types, 1, 2, 2,
                                    PyUFunc_None, "logit",
                                    "logit_docstring", 0);

    d = PyModule_GetDict(m);
    PyDict_SetItemString(d, "logit", logit);
    Py_DECREF(logit);
}
#endif
Again, only the line in setup.py that adds the extension changes; it is replaced with
config.add_extension('npufunc', ['add_triplet.c'])
"Python.h"
"math.h"
"numpy/ndarraytypes.h"
"numpy/ufuncobject.h"
"numpy/npy_3kcompat.h"
/*
* add_triplet.c
* This is the C code for creating your own
* Numpy ufunc for a structured array dtype.
*
* Details explaining the Python-C API can be found under
* Extending and Embedding and Python/C API at
* docs.python.org .
*/
static PyMethodDef StructUfuncTestMethods[] = {
    {NULL, NULL, 0, NULL}
};

/* The loop definition must precede the PyMODINIT_FUNC. */

static void add_uint64_triplet(char **args, npy_intp *dimensions,
                               npy_intp *steps, void *data)
{
    npy_intp i;
    npy_intp is1 = steps[0];
    npy_intp is2 = steps[1];
    npy_intp os = steps[2];
    npy_intp n = dimensions[0];
    uint64_t *x, *y, *z;

    char *i1 = args[0];
    char *i2 = args[1];
    char *op = args[2];

    for (i = 0; i < n; i++) {
        x = (uint64_t *)i1;
        y = (uint64_t *)i2;
        z = (uint64_t *)op;

        z[0] = x[0] + y[0];
        z[1] = x[1] + y[1];
        z[2] = x[2] + y[2];

        i1 += is1;
        i2 += is2;
        op += os;
    }
}
    Py_DECREF(dtype_dict);

    dtypes[0] = dtype;
    dtypes[1] = dtype;
    dtypes[2] = dtype;

    /* Register ufunc for structured dtype */
    PyUFunc_RegisterLoopForDescr(add_triplet,
                                 dtype,
                                 &add_uint64_triplet,
                                 dtypes,
                                 NULL);

    d = PyModule_GetDict(m);
    PyDict_SetItemString(d, "add_triplet", add_triplet);
    Py_DECREF(add_triplet);

#if defined(NPY_PY3K)
    return m;
#endif
}
data
Arbitrary data (extra arguments, function names, etc.) that can be stored with the ufunc and will be passed in when it is called. An example of a valid 1-d loop function is given below:
static void
double_add(char **args, npy_intp *dimensions, npy_intp *steps,
           void *extra)
{
    npy_intp i;
    npy_intp is1 = steps[0], is2 = steps[1];
    npy_intp os = steps[2], n = dimensions[0];
    char *i1 = args[0], *i2 = args[1], *op = args[2];

    for (i = 0; i < n; i++) {
        *((double *)op) = *((double *)i1) +
                          *((double *)i2);
        i1 += is1;
        i2 += is2;
        op += os;
    }
}
data
An array of data. There should be ntypes entries (or NULL), one for every loop function defined
for this ufunc. This data will be passed in to the 1-d loop. One common use of this data variable is to
pass in an actual function to call to compute the result when a generic 1-d loop (e.g. PyUFunc_d_d)
is being used.
types
An array of type-number signatures (type char ). This array should be of size (nin+nout)*ntypes
and contain the data-types for the corresponding 1-d loop. The inputs should be first followed by the
outputs. For example, suppose a ufunc supports one integer and one double 1-d loop (length-2
func and data arrays), takes 2 inputs, and returns 1 output that is always a complex double. The
types array would then be
static char types[6] = {NPY_INT, NPY_INT, NPY_CDOUBLE,
                        NPY_DOUBLE, NPY_DOUBLE, NPY_CDOUBLE}
The bit-width names can also be used (e.g. NPY_INT32, NPY_COMPLEX128 ) if desired.
ntypes
The number of data-types supported. This is equal to the number of 1-d loops provided.
nin
The number of input arguments.
nout
The number of output arguments.
identity
Either PyUFunc_One, PyUFunc_Zero, or PyUFunc_None. This specifies what should be returned when an empty array is passed to the reduce method of the ufunc.
name
A NULL -terminated string providing the name of this ufunc (should be the Python name it will be
called).
doc
A documentation string for this ufunc (will be used in generating the response to
{ufunc_name}.__doc__). Do not include the function signature or the name as this is generated automatically.
check_return
Not presently used, but this integer value does get set in the structure-member of similar name.
The returned ufunc object is a callable Python object. It should be placed in a (module) dictionary under the same
name as was used in the name argument to the ufunc-creation routine. The following example is adapted from the
umath module
static PyUFuncGenericFunction atan2_functions[] = {
    PyUFunc_ff_f, PyUFunc_dd_d,
    PyUFunc_gg_g, PyUFunc_OO_O_method};
static void *atan2_data[] = {
    (void *)atan2f, (void *)atan2,
    (void *)atan2l, (void *)"arctan2"};
static char atan2_signatures[] = {
    NPY_FLOAT, NPY_FLOAT, NPY_FLOAT,
    NPY_DOUBLE, NPY_DOUBLE, NPY_DOUBLE,
    NPY_LONGDOUBLE, NPY_LONGDOUBLE, NPY_LONGDOUBLE,
    NPY_OBJECT, NPY_OBJECT, NPY_OBJECT};
...
/* in the module initialization code */
PyObject *f, *dict, *module;
...
dict = PyModule_GetDict(module);
...
f = PyUFunc_FromFuncAndData(atan2_functions,
atan2_data, atan2_signatures, 4, 2, 1,
PyUFunc_None, "arctan2",
"a safe and correct arctan(x1/x2)", 0);
PyDict_SetItemString(dict, "arctan2", f);
Py_DECREF(f);
...
Discovery is seeing what everyone else has seen and thinking what no
one else has thought.
Albert Szent-Györgyi
If you know the number of dimensions you will be using, then you can always write nested for loops to accomplish the iteration.
If, however, you want to write code that works with any number of dimensions, then you can make use of the array
iterator. An array iterator object is returned when accessing the .flat attribute of an array.
Basic usage is to call PyArray_IterNew ( array ) where array is an ndarray object (or one of its sub-classes).
The returned object is an array-iterator object (the same object returned by the .flat attribute of the ndarray). This
object is usually cast to PyArrayIterObject* so that its members can be accessed. The only members that are needed
are iter->size which contains the total size of the array, iter->index, which contains the current 1-d index
into the array, and iter->dataptr which is a pointer to the data for the current element of the array. Sometimes it
is also useful to access iter->ao which is a pointer to the underlying ndarray object.
After processing data at the current element of the array, the next element of the array can be obtained using the macro
PyArray_ITER_NEXT ( iter ). The iteration always proceeds in a C-style contiguous fashion (last index varying
the fastest). The PyArray_ITER_GOTO ( iter, destination ) can be used to jump to a particular point in the
array, where destination is an array of npy_intp data-type with space to handle at least the number of dimensions
in the underlying array. Occasionally it is useful to use PyArray_ITER_GOTO1D ( iter, index ) which will
jump to the 1-d index given by the value of index. The most common usage, however, is given in the following
example.
PyObject *obj; /* assumed to be some ndarray object */
PyArrayIterObject *iter;
...
iter = (PyArrayIterObject *)PyArray_IterNew(obj);
if (iter == NULL) goto fail;
/* Assume fail has clean-up code */
while (iter->index < iter->size) {
    /* do something with the data at iter->dataptr */
    PyArray_ITER_NEXT(iter);
}
...
You can also use PyArrayIter_Check ( obj ) to ensure you have an iterator object and PyArray_ITER_RESET
( iter ) to reset an iterator object back to the beginning of the array.
It should be emphasized at this point that you may not need the array iterator if your array is already contiguous (using
an array iterator will work but will be slower than the fastest code you could write). The major purpose of array
iterators is to encapsulate iteration over N-dimensional arrays with arbitrary strides. They are used in many, many
places in the NumPy source code itself. If you already know your array is contiguous (Fortran or C), then simply
adding the element size to a running pointer variable will step you through the array very efficiently. In other words,
code like this will probably be faster for you in the contiguous case (assuming doubles).
npy_intp size;
double *dptr;  /* could make this any variable type */
size = PyArray_SIZE(obj);
dptr = PyArray_DATA(obj);
while (size--) {
    /* do something with the data at dptr */
    dptr++;
}
It can often be advantageous to perform the inner loop over the dimension with the highest number of elements, to take
advantage of speed enhancements available on microprocessors that use pipelining to enhance fundamental operations.
The PyArray_IterAllButAxis ( array, &dim ) function constructs an iterator object that is modified so that it
will not iterate over the dimension indicated by dim. The only restriction on this iterator object is that the
PyArray_ITER_GOTO1D ( it, ind ) macro cannot be used (thus flat indexing won't work either if you pass
this object back to Python, so you shouldn't do this). Note that the returned object from this routine is still usually
cast to PyArrayIterObject *. All that's been done is to modify the strides and dimensions of the returned iterator to
simulate iterating over array[...,0,...] where 0 is placed on the dim-th dimension. If dim is negative, then the dimension
with the largest axis is found and used.
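For instance, a sketch of walking a two-dimensional array of doubles one row at a time might look like this (obj is assumed to be a well-behaved ndarray of doubles):

int dim = 1;  /* do not iterate over axis 1; walk it manually instead */
PyArrayIterObject *it;
npy_intp j, ncols, stride;

it = (PyArrayIterObject *)PyArray_IterAllButAxis(obj, &dim);
if (it == NULL) goto fail;
ncols = PyArray_DIM((PyArrayObject *)obj, dim);
stride = PyArray_STRIDE((PyArrayObject *)obj, dim);
while (it->index < it->size) {
    char *p = it->dataptr;
    for (j = 0; j < ncols; j++) {
        /* do something with *(double *)p */
        p += stride;
    }
    PyArray_ITER_NEXT(it);
}
Py_DECREF(it);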
Iterating over multiple arrays
Very often, it is desirable to iterate over several arrays at the same time. The universal functions are an example of
this kind of behavior. If all you want to do is iterate over arrays with the same shape, then simply creating several
iterator objects is the standard procedure. For example, the following code iterates over two arrays assumed to be the
same shape and size (actually obj1 just has to have at least as many total elements as does obj2):
/* It is already assumed that obj1 and obj2
   are ndarrays of the same shape and size.
*/
iter1 = (PyArrayIterObject *)PyArray_IterNew(obj1);
if (iter1 == NULL) goto fail;
iter2 = (PyArrayIterObject *)PyArray_IterNew(obj2);
if (iter2 == NULL) goto fail;  /* assume iter1 is DECREF'd at fail */
while (iter2->index < iter2->size) {
    /* process with iter1->dataptr and iter2->dataptr */
    PyArray_ITER_NEXT(iter1);
    PyArray_ITER_NEXT(iter2);
}
The function PyArray_RemoveSmallest ( multi ) can be used to take a multi-iterator object and adjust all the
iterators so that iteration does not take place over the largest dimension (it makes that dimension of size 1). The code
being looped over that makes use of the pointers will very likely also need the strides data for each of the iterators.
This information is stored in multi->iters[i]->strides.
There are several examples of using the multi-iterator in the NumPy source code as it makes N-dimensional
broadcasting-code very simple to write. Browse the source for more examples.
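A minimal sketch of broadcasting two arrays against each other with the multi-iterator, assuming both contain doubles, is given below:

PyArrayMultiIterObject *multi;

multi = (PyArrayMultiIterObject *)PyArray_MultiIterNew(2, obj1, obj2);
if (multi == NULL) goto fail;
while (PyArray_MultiIter_NOTDONE(multi)) {
    double *p1 = (double *)PyArray_MultiIter_DATA(multi, 0);
    double *p2 = (double *)PyArray_MultiIter_DATA(multi, 1);
    /* *p1 and *p2 are a broadcast pair of elements */
    PyArray_MultiIter_NEXT(multi);
}
Py_DECREF(multi);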
After you have defined a new Python type object, you must then define a new PyArray_Descr structure whose
typeobject member will contain a pointer to the data-type you've just defined. In addition, the required functions in the
.f member must be defined: nonzero, copyswap, copyswapn, setitem, getitem, and cast. The more functions in the
.f member you define, however, the more useful the new data-type will be. It is very important to initialize unused
functions to NULL. This can be achieved using PyArray_InitArrFuncs (f).
Once a new PyArray_Descr structure is created and filled with the needed information and useful functions you
call PyArray_RegisterDataType (new_descr). The return value from this call is an integer providing you
with a unique type_number that specifies your data-type. This type number should be stored and made available
by your module so that other modules can use it to recognize your data-type (the other mechanism for finding a
user-defined data-type number is to search based on the name of the type-object associated with the data-type using
PyArray_TypeNumFromName ).
Registering a casting function
You may want to allow builtin (and other user-defined) data-types to be cast automatically to your data-type. In
order to make this possible, you must register a casting function with the data-type you want to be able to cast from.
This requires writing low-level casting functions for each conversion you want to support and then registering these
functions with the data-type descriptor. A low-level casting function has the signature illustrated below.
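As an illustration (a sketch, not taken verbatim from the guide), a low-level function converting doubles to floats could be written as follows; double_to_float is the name used in the registration snippet that follows:

static void
double_to_float(double *from, float *to, npy_intp n,
                void *fromarr, void *toarr)
{
    /* fromarr and toarr are the source and destination array objects;
       they are not needed for a simple numeric cast. */
    while (n--) {
        *to++ = (float)(*from++);
    }
}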
This could then be registered to convert doubles to floats using the code:
doub = PyArray_DescrFromType(NPY_DOUBLE);
PyArray_RegisterCastFunc(doub, NPY_FLOAT,
(PyArray_VectorUnaryFunc *)double_to_float);
Py_DECREF(doub);
usertype
The user-defined type this loop should be indexed under. This number must be a user-defined type or
an error occurs.
function
The ufunc inner 1-d loop. This function must have the signature as explained in Section 3 .
arg_types
(optional) If given, this should contain an array of integers of at least size ufunc.nargs containing the
data-types expected by the loop function. The data will be copied into a NumPy-managed structure
so the memory for this argument should be deleted after calling this function. If this is NULL, then
it will be assumed that all data-types are of type usertype.
data
(optional) Specify any optional data needed by the function which will be passed when the function
is called.
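Putting these arguments together, a registration call might look like the following sketch (my_ufunc, my_typenum and my_loop are hypothetical names):

int arg_types[3];   /* nargs entries: two inputs and one output */

arg_types[0] = my_typenum;
arg_types[1] = my_typenum;
arg_types[2] = my_typenum;

if (PyUFunc_RegisterLoopForType((PyUFuncObject *)my_ufunc, my_typenum,
                                &my_loop, arg_types, NULL) < 0) {
    /* registration failed; handle the error */
}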
Notice that the full PyArrayObject is used as the first entry in order to ensure that the binary layout of instances
of the new type is identical to the PyArrayObject.
2. Fill in a new Python type-object structure with pointers to new functions that will over-ride the default behavior
while leaving any function that should remain the same unfilled (or NULL). The tp_name element should be
different.
3. Fill in the tp_base member of the new type-object structure with a pointer to the (main) parent type object. For
multiple-inheritance, also fill in the tp_bases member with a tuple containing all of the parent objects in the
order they should be used to define inheritance. Remember, all parent-types must have the same C-structure for
multiple inheritance to work properly.
4. Call PyType_Ready (<pointer_to_new_type>). If this function returns a negative number, a failure occurred
and the type is not initialized. Otherwise, the type is ready to be used. It is generally important to place a
reference to the new type into the module dictionary so it can be accessed from Python.
More information on creating sub-types in C can be learned by reading PEP 253 (available at
http://www.python.org/dev/peps/pep-0253).
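Putting the steps above together, a minimal sketch (every name here is hypothetical) might look like:

typedef struct {
    PyArrayObject base;   /* must come first so the layout matches ndarray */
    int my_flag;          /* example of an extra instance member */
} MyArrayObject;

static PyTypeObject MyArray_Type = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "mymodule.MyArray",        /* must differ from the parent's name */
    .tp_basicsize = sizeof(MyArrayObject),
    .tp_flags = Py_TPFLAGS_DEFAULT,
};

/* in the module initialization function */
MyArray_Type.tp_base = &PyArray_Type;
if (PyType_Ready(&MyArray_Type) < 0) {
    return NULL;
}
Py_INCREF(&MyArray_Type);
PyModule_AddObject(m, "MyArray", (PyObject *)&MyArray_Type);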
Specific features of ndarray sub-typing
Some special methods and attributes are used by arrays in order to facilitate the interoperation of sub-types with the
base ndarray type.
The __array_finalize__ method
ndarray.__array_finalize__
Several array-creation functions of the ndarray allow specification of a particular sub-type to be created. This
allows sub-types to be handled seamlessly in many routines. When a sub-type is created in such a fashion,
however, neither the __new__ method nor the __init__ method gets called. Instead, the sub-type is allocated
and the appropriate instance-structure members are filled in. Finally, the __array_finalize__ attribute
is looked up in the object dictionary. If it is present and not None, then it can be either a CObject containing
a pointer to a PyArray_FinalizeFunc or it can be a method taking a single argument (which could be
None).
If the __array_finalize__ attribute is a CObject, then the pointer must be a pointer to a function with the
signature:
(int) (PyArrayObject *, PyObject *)
The first argument is the newly created sub-type. The second argument (if not NULL) is the parent array (if
the array was created using slicing or some other operation where a clearly-distinguishable parent is present).
This routine can do anything it wants to. It should return a -1 on error and 0 otherwise.
If the __array_finalize__ attribute is not None nor a CObject, then it must be a Python method that takes
the parent array as an argument (which could be None if there is no parent), and returns nothing. Errors in this
method will be caught and handled.
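For example, a small Python-level subclass that propagates an extra attribute through views and slices might be written like this (InfoArray and its info attribute are illustrative names):

import numpy as np

class InfoArray(np.ndarray):
    def __array_finalize__(self, obj):
        # obj is the parent array, or None when the array is created directly
        self.info = getattr(obj, 'info', None)

a = InfoArray((4,))          # calls ndarray.__new__ with a shape
a.info = 'some metadata'
b = a[1:]                    # slicing; __array_finalize__ runs with obj == a
print(b.info)                # -> 'some metadata'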
The __array_priority__ attribute
ndarray.__array_priority__
This attribute allows simple but flexible determination of which sub-type should be considered primary when
an operation involving two or more sub-types arises. In operations where different sub-types are being used,
the sub-type with the largest __array_priority__ attribute will determine the sub-type of the output(s).
If two sub-types have the same __array_priority__ then the sub-type of the first argument determines
the output. The default __array_priority__ attribute returns a value of 0.0 for the base ndarray type and
1.0 for a sub-type. This attribute can also be defined by objects that are not sub-types of the ndarray and can be
used to determine which __array_wrap__ method should be called for the return output.
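A short sketch of the effect at the Python level (MyArray is an illustrative subclass name):

import numpy as np

class MyArray(np.ndarray):
    __array_priority__ = 15.0    # larger than the default 1.0 for sub-types

a = np.arange(3)                     # plain ndarray, priority 0.0
b = np.arange(3).view(MyArray)       # sub-type with higher priority
print(type(a + b))                   # -> <class '__main__.MyArray'>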
INDEX

Symbols
__array_finalize__ (ndarray attribute), 100
__array_priority__ (ndarray attribute), 100
__array_wrap__ (ndarray attribute), 101

A
adding new
  dtype, 97
  ufunc, 76, 79, 82, 94
array iterator, 95, 97

B
Boost.Python, 75
broadcasting, 96

C
ctypes, 69, 74
cython, 66, 69

D
dtype
  adding new, 97

E
extension module, 53, 60

F
f2py, 62, 66

N
ndarray
  subtyping, 99, 101
ndpointer() (built-in function), 71
NPY_ARRAY_ENSUREARRAY (C variable), 58
NPY_ARRAY_ENSURECOPY (C variable), 58
NPY_ARRAY_FORCECAST (C variable), 58
NPY_ARRAY_IN_ARRAY (C variable), 58
NPY_ARRAY_INOUT_ARRAY (C variable), 58
NPY_ARRAY_OUT_ARRAY (C variable), 58
numpy.doc.basics (module), 9
numpy.doc.broadcasting (module), 26
numpy.doc.byteswapping (module), 29
numpy.doc.creation (module), 11
numpy.doc.howtofind (module), 7
numpy.doc.indexing (module), 20
numpy.doc.methods_vs_functions (module), 51
numpy.doc.misc (module), 47
numpy.doc.performance (module), 45
numpy.doc.structured_arrays (module), 31
numpy.doc.subclassing (module), 36

P
PyArray_FROM_OTF (C function), 57
PyArray_SimpleNew (C function), 59
PyArray_SimpleNewFromData (C function), 59
PyModule_AddIntConstant (C function), 54
PyModule_AddObject (C function), 54
PyModule_AddStringConstant (C function), 54

R
reference counting, 56

S
SIP, 75
subtyping
  ndarray, 99, 101
swig, 75

U
ufunc
  adding new, 76, 79, 82, 94