(Should be read prior to studying chapter 4)
Our programs in the first few chapters of the book make heavy use of Python lists. As a matter of fact, we typically go one step further and opt for list comprehensions whenever that is possible. Beyond some introductory examples, all of these Python lists are "homogeneous", in the sense that they contain elements of only one type. Furthermore, since we use list comprehensions so often, the list size is always known ahead of time, i.e., there is no need to keep appending until some condition is met (or, to be fair, we structure our code so that this is so). In this context, here’s an example of a common operation one could carry out:
tots = [w*f(x) for w,x in zip(ws,xs)]
namely, stepping through one list (or more) and applying a function or multiplying by a coefficient, which may be different for each item. Here, we are essentially carrying out the same operation over and over again, a fixed number of times. At the end of that process, we typically sum up all the elements in the list.
This raises the natural question: wouldn’t it make more sense to carry out such tasks using a homogeneous, fixed-length container? This is precisely what the Numerical Python array object provides: as a result, it is fast and space-efficient. In addition to this, it allows us to avoid, for the most part, having to write loops: even in our listcomps, which are more concise than standard loops, there is a need to explicitly step through each element one by one. Numerical Python arrays often obviate such syntax, allowing us to carry out mathematical operations on entire blocks of data in one step.
Linear algebra applications, in particular, which entail heavy use of fixed-length, homogeneous entities (namely, mathematical matrices) can benefit from numpy arrays, which is why we make a slight detour here, to introduce them. In standard Python, matrices can be represented by lists of lists which are, however, quite clunky, in that their syntax is different from the mathematical notation you are already used to. Before we tackle the two-dimensional case, which will be of interest in chapter 4, we first go over one-dimensional numpy arrays, which are direct replacements for Python lists, like the tots mentioned above. From here onward, we will not keep referring to these new containers as numpy arrays: they’ll simply be called arrays, the same way the core-Python lists are simply called lists.
We start in the usual way, with a Python list:
L = [11, 7, 19, 22, 55]
The next step is to import numpy functionality which, as you know, we can do in any one of a number of ways. We here employ the standard convention of importing numpy with a shortened name:
import numpy as np
The easiest way to make an array is to employ the array() function, which takes in a sequence-like entity and returns an array containing the data that was passed in. In our example, with the list L, we have:
xs = np.array(L)
xs
Notice that the array() function is part of numpy, so we had to say np.array() to access it. This example clearly shows the Python interpreter understanding that xs is not a list, but an array object (Note that there is both an array() function and an array object involved here: the former created the latter). It’s worth highlighting that when you print out lists and arrays, the output is slightly different:
print(xs)
print(L)
Obviously, a print-out of an array doesn’t show the commas: this is quite helpful when you need to focus on the numbers themselves, which is the task arrays were designed for. If you want to see how many elements are to be found in the array xs, you might be tempted to reach for the core-Python built-in function len(): this may cause headaches later on (for a two-dimensional array, len() returns only the number of rows, not the total number of elements), so it’s best to use instead the size attribute of the array xs:
xs.size
Remember: numpy arrays are fixed-length, so the total number of elements cannot change. In other words, there is no append() method for arrays (thus, their size is fixed, whereas for a list len(L) would return different values before and after we append/pop an element).
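To make the contrast concrete, we can reuse the L and xs from above (expected results shown as comments):
L.append(3)
len(L)     # now 6: the list grew
xs.size    # still 5: an array's size cannot change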
Another very useful attribute arrays have is dtype, namely the data type of the elements:
xs.dtype
You can immediately see that while we did not explicitly provide a data type for the elements of xs, this was inferred from the elements themselves (remember: numpy arrays are homogeneous, meaning all elements have the same type). You can also see that the type returned is not the standard Python integer, but int64: the number tells us how many bits are used to store this integer in this specific representation (64 bits make up what most languages call a “long integer”). You can also use Python’s type() to check the different types:
type(L)
type(L[0])
type(xs)
type(xs[0])
which clearly shows that we are dealing here with distinct types: the numpy types directly map onto machine representations.
As you can imagine, we can also have arrays containing floats:
ys = np.array([1.1, 2.2, 3.3, 4.4])
ys.dtype
where we took the opportunity to place the list that is used to create the array directly as an argument to the array() function. Here, the 64 informs us that we are dealing with a double-precision float (see chapter 2 for more details). In our examples above, namely xs and ys, we allowed numpy to infer the type of the elements on its own. However, we can also provide the data type as an (optional) argument to array():
zs = np.array([5, 6, 7, 8], dtype=np.float32)
zs
zs.dtype
Here we took the opportunity to use a list of integers and have numpy carry out the promotion to (single-precision) floats. Thus, numpy provides us with fine-grained control over which machine representation we want to use: especially for floats, this can have major practical consequences and constitutes one of the major selling points of numpy arrays.
So far, we’ve only seen how to create numpy arrays through the use of a (throwaway) list. For example, if we wanted an array containing a fixed number of zeros, we would do something like:
np.array([0.]*5)
whereas in order to produce an array containing a fixed number of ones we would say:
np.array([1.]*4)
These use cases are sufficiently common that numpy contains functions to generate such arrays directly:
np.zeros(5)
np.ones(4)
where, as you can see, we passed in as an argument the total number of elements each time.
Another very useful numpy function that generates arrays is arange() which is a generalization of the core-Python range():
np.arange(10,16)
Obviously, you could also provide a stride, as a third argument to arange(), just like you did for range(). For instance:
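np.arange(10,16,2)   # array([10, 12, 14])
At this point, those who are new to numpy often find out that arange() can also take in floats as arguments: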
np.arange(10.,16.)
However, in chapter 2 we will learn that this invites trouble. Here are two examples showing how things can go wrong:
np.arange(1.5,1.75,0.05)
np.arange(1.5,1.8,0.05)
Floats (and floating-point operations) are not stored (carried out) using infinite precision. As a result, in the first case the endpoint is excluded but in the second case it is included. This can have dramatic consequences: take numerical integration as an example. When applying such formulas it is very important to apply, say, a closed method properly (i.e., include both endpoints). The numpy documentation happens to be very explicit on this: “When using a non-integer step, such as 0.1, the results will often not be consistent”. You might be tempted to solve this problem by a one-off solution like:
np.arange(1.5,1.75001,0.05)
This is a bad idea. Instead, it is much better to use the linspace() function, which returns a specified number of evenly spaced elements over a specified interval, for example:
np.linspace(1.5,1.75,6)
np.linspace(1.5,1.8,7)
Note that this provides consistent behavior, at the cost of having to provide the desired number of elements ourselves. You may be wondering why the linspace() function has this name. The answer will become obvious once we point out that numpy also has a function called logspace(). This produces a logarithmically spaced grid of points (the default behavior is to use a base of 10). For example, if we wanted 5 points logarithmically distributed from $10^{−1}$ to $10^{−5}$, we would say:
np.logspace(-1, -5, 5)
The first argument is the exponent of the starting value, the second argument is the exponent of the final value, and the third argument is the number of samples to generate. Obviously, the grid of points that linspace() produces is linear. Remember: both linspace() and logspace() include the two provided endpoints.
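In case the connection between the two names is still not obvious: logspace() is equivalent to raising the base to the power of a linspace() grid, as you can quickly check:
np.logspace(-1, -5, 5)          # array([1.e-01, 1.e-02, 1.e-03, 1.e-04, 1.e-05])
10.0**np.linspace(-1, -5, 5)    # the same values (up to rounding)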
The two differences between lists and arrays we’ve encountered so far (i.e., arrays’ being fixed-length and homogeneous) may appear to you not to be too important. After all, one can create a list that never grows/shrinks and is made up of elements of the same type. We will now see two more major differences, the first having to do with how slicing works and the second one relating to how calculations are propagated to the different elements. We take up each one of these in its own subsection.
Indexing for arrays works in exactly the same way as for lists, namely with square brackets:
xs = np.arange(10,16)
print(xs)
xs[2]
Likewise, slicing appears, at first sight, to be identical to how slicing of lists works:
xs = np.arange(10,16)
print(xs[2:4])
We could keep adding examples, e.g., xs[-1] accesses the last element in the array and xs[1:5:2] employs a stride, as you can quickly verify:
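xs[-1]      # 15: the last element
xs[1:5:2]   # array([11, 13]): indices 1 and 3
However, there is a crucial difference, namely that array slices are views on the original array: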
r = np.array([11,7,19,22])
print(r)
sli = r[1:3]
print(sli)
sli[0] = 55
print(sli)
print(r)
This example happens to be exactly the same as what we did in the Python tutorial for the case of lists (which is why we are temporarily reverting back to an older variable naming convention). As you may recall from there, slices of lists gave de facto copies, so the original list was unaffected by subsequent changes to the slice. This behavior of slices led us to use L[:] to get a copy of the list L. Numpy arrays are efficient, eliminating the need to copy data: of course, one should always be mindful that different slices all refer to the same underlying array. For the few cases where you do need a true copy, you can use numpy.copy():
r = np.array([11,7,19,22])
sli = np.copy(r[1:3])
sli[0] = 55
print(sli)
print(r)
It is easy to see that you could, just as well, copy the entire array over, without impacting the original:
r2 = np.copy(r)
r2[1] = 0
print(r2)
print(r)
In the main text of the book, we frequently make copies of arrays inside functions, in the spirit of impacting the external world only through the return value of a function. That being said, this is only a pedagogical choice: in practice, the performance is much better if one modifies an array “in place”, thereby overwriting the original array.
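To make the copying pattern concrete, here is a minimal sketch (the function name scale_all is made up for this illustration): the input array is copied, so the caller’s array is left untouched.
def scale_all(xs, c):
    ys = np.copy(xs)   # work on a copy, leaving the input array alone
    ys *= c            # modify the copy in place
    return ys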
Another difference between lists and arrays, expanded on in the coming subsection, has to do with broadcasting. Qualitatively, this means that numpy often knows how to handle entities whose dimensionalities don’t quite match. Here’s an example:
r = np.array([11, 7, 19, 22])
r[1:3] = 55
print(r)
As it so happens, this is an example that we tried in the Python tutorial using lists: there, we received a TypeError, since Python didn’t know how to handle a single number on the right-hand-side, when the left-hand-side needed at least two numbers (i.e., Python expected an iterable on the right-hand-side). For the case of arrays, instead, the one value on the right-hand-side is broadcast to as many slots as necessary.
Since we’ve now gone over slicing and how one value can be broadcast to several slots, this is a good place to see a very common numpy array idiom, which uses an “everything slice”, namely xs[:]. Remember, since array slices are views on the original array, xs[:] cannot be used to create a copy of the array. However, this syntax does have a nice use as shown below:
xs = np.arange(10,16)
print(xs)
xs[:] = 7
print(xs)
We used the everything-slice to broadcast the number 7 onto all slots of a pre-existing array. You might be wondering why we need this everything-slice to accomplish this. The answer should be clear if you see that:
xs = 7
print(xs)
If you don’t use an everything-slice, Python thinks you are creating a new integer variable which just happens to be called xs, whereas that was (almost certainly) not your intention. We’ve already seen some of the new syntax allowed by arrays. The following section discusses such array operations in greater detail, highlighting their strengths.
When introducing arrays above, we mentioned that they allow us to avoid, for the most part, having to write loops. It is now time to see what this means:
xs = np.arange(10,16)
ys = np.arange(20,26)
print(xs)
print(ys)
print(xs + ys)
We see that for numpy arrays addition is interpreted as an elementwise operation (i.e., each pair of elements is added together). This ability to carry out such batch operations on all the elements of an array (or two) at once is often called vectorization. Note how different this is from the corresponding behavior of Python lists:
xs = list(range(10,16))
ys = list(range(20,26))
xs
ys
xs+ys
news = [x+y for x,y in zip(xs,ys)]
news
For lists, + concatenates instead of carrying out elementwise addition. Thus, in order to produce the same result that arrays give with a simple +, with lists we have to use a list comprehension as well as zip().
Coming back to numpy arrays, multiplication is also interpreted to be an elementwise operation (and the same holds for subtraction, division, and exponentiation):
xs = np.arange(10,16)
ys = np.arange(20,26)
print(xs*ys)
np.sum(xs*ys)
Here we also took the opportunity to introduce numpy.sum(), which sums up all the elements of a numpy array. Thus, the last line is simply evaluating the scalar product of two vectors, in one short line. You may be wondering if you could have used, instead, the sum() built-in that Python provides. The answer is that, while both options are legal, in practice numpy functions are dramatically faster, as you will explore in a problem. The takeaway here is that you should employ numpy functionality whenever possible.
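You can quickly verify that the two options agree, leaving the timing comparison for the problem:
sum(xs*ys)      # core-Python built-in: 1705
np.sum(xs*ys)   # same value, but much faster for large arrays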
Equally interesting to the elementwise operations between two arrays, is the ability numpy provides us to also combine an array with a scalar: this follows from the aforementioned broadcasting, whereby numpy knows how to interpret entities with different (but compatible) shapes. Here’s an example:
xs = np.arange(10,16)
print(xs)
print(2*xs)
You may recall that multiplication of a list with a scalar led to very different results (as a matter of fact, we used that property earlier in this section when we said [0.]*5). Instead, numpy knows how to combine a scalar with an array, to perform the multiplication for all array elements (so this is one more case where you don’t need a list comprehension). You can think of what’s happening here as the value 2 being “stretched” into an array with the same size as xs and then an elementwise multiplication taking place. Needless to say, you could carry out several other operations with scalars, for example:
xs = np.arange(10,16)
print(1/xs)
Such combinations of broadcasting (whereby you can carry out operations between scalars and arrays) and vectorization (whereby you write one expression but the calculation is carried out for all elements) can be hard to grasp at first, but are very powerful (both expressive and efficient) once you get used to them.
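As a small illustration of the two working together, here is a (made-up) expression evaluated for all elements in one line:
xs = np.arange(10,16)
2*xs**2 + 1/xs - 3   # broadcasting the scalars, vectorizing over all six elements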
In addition to knowing how to perform elementwise operations between arrays (or between arrays and scalars), numpy also contains several functions that look similar to corresponding math functions: for example, numpy.sqrt(), numpy.exp(), numpy.log(), numpy.sin() do what you expect them to do. The crucial difference from the corresponding math functions is that such numpy functions are designed to take in entire arrays so they are almost always faster. Other numpy functions may be new to you, but can turn out to be very useful, e.g. numpy.isfinite() or numpy.sort(). Note that all the functions we’ve mentioned so far in this paragraph take in an array and return an array. Numpy also has functions that take in an array and return a scalar, like the numpy.sum() we encountered above. These are reasonably named, e.g. numpy.prod() calculates the product of all element values and numpy.amin() returns the element with the smallest value. Another very helpful function is numpy.argmax(), which returns the index of the maximum value. Finally, numpy contains an entire module, numpy.random, with several very useful and optimized functions related to random numbers. It’s worth spending some time with the numpy documentation to familiarize yourself with such functions, since they are intuitive and efficient.
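Here is a brief taste of a few of the functions just mentioned (expected results shown as comments):
xs = np.array([4., 9., 16.])
np.sqrt(xs)       # array([2., 3., 4.]): array in, array out
np.prod(xs)       # 576.0: array in, scalar out
np.argmax(xs)     # 2: the index of the largest element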
We can also use the functionality included in numpy to create our own functions that can handle arrays. For example, the following:
def fa(xs):
    return 1/np.sqrt(xs**2+1)
xs = np.arange(10,16)
print(fa(xs))
We employed numpy.sqrt(), which knows how to handle arrays as arguments; the squaring and the addition of 1 were handled by numpy on its own (i.e., interpreted as referring to entire arrays as well). We are now ready to re-implement the tots shown at the start of this section. Instead of the following solution which employs lists,
import math
def f(x):
    return 1/math.sqrt(x**2+1)
ws = list(range(30,36))
xs = list(range(10,16))
tots = [w*f(x) for w,x in zip(ws,xs)]
print(tots)
our new solution, employing arrays, is simply:
def fa(xs):
    return 1/np.sqrt(xs**2+1)
ws = np.arange(30,36)
xs = np.arange(10,16)
tots = ws*fa(xs)
print(tots)
Note how:
tots = [w*f(x) for w,x in zip(ws,xs)]
was replaced by:
tots = ws*fa(xs)
Once again, once you get used to this way of doing things, you will see how powerful and convenient it is to carry out such array operations.
The numpy arrays we encountered in the previous section were all replacements for regular Python lists. In Python we could also introduce lists-of-lists, for example:
LL = [[11,12,13,14], [15,16,17,18], [19,20,21,22]]
As you might imagine, we can employ the array() function to produce an array that contains the elements in LL. We have:
A = np.array(LL)
A
This example clearly shows the Python interpreter understanding that A is not a list, but an array object. Intriguingly, the A entity is already being output over three rows, making it easy to see that it’s a two-dimensional entity, similar to a mathematical matrix. As a matter of fact, using print() to output it makes this fact even more obvious:
print(A)
which is to be compared with how Python outputs a list of lists:
print(LL)
In addition to placing each of the three rows on its own line, a print-out of an array doesn’t show the commas, allowing us to focus on the numbers themselves.
A lot of what we saw on how to create one-dimensional arrays carries over to the two-dimensional case. This applies to using type() for investigating array or array element types. It also applies to specifically creating arrays with the desired type, for example:
B = np.array(LL,dtype=np.float64)
print(B)
B.dtype
where we explicitly made sure we are creating an array of floats. We also took the opportunity to recall that all arrays have an attribute called dtype which refers to the type of the array elements (remember: all elements of an array have the same type).
You may also remember that we used the size array attribute for the one-dimensional case, in order to see how many elements are present. It works equally well for the two-dimensional case:
A.size
Obviously, this counts the total number of elements, across all rows and columns.
Another attribute, one we didn’t touch upon before, is ndim, which tells us the dimensionality of the numpy array:
A.ndim
The number of dimensions can be thought of as the number of distinct “axes” according to which we are listing elements. Incidentally, our terminology may be confusing to those coming from a linear-algebra background. In linear algebra, we say that a matrix A with m rows and n columns has “dimensions” m and n, or sometimes that it has dimensions m × n. In contradistinction to this, the numpy convention is to refer to an array like A as having “dimension 2”, since it’s made up of rows and columns (i.e., how many elements are in each row and in each column doesn’t matter). In our example of A, the dimension is 2, even though there are 3 rows and 4 columns.
The next obvious question is whether we can access the number of rows and the number of columns somehow (remember: size contains the product of the two). The answer, unsurprisingly, is yes. There exists yet another attribute, shape, which returns a tuple containing the number of elements in each dimension, for example:
A.shape
It's reassuring that $3 \times 4 = 12$. As you should immediately check, a one-dimensional array would return 1 for ndim. Similarly, its shape attribute would be a tuple with one element, e.g., (7,), where the number is equal (for the one-dimensional case) to the total size of the array. The attribute we will spend the most time on in what follows is shape.
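You can carry out these checks yourself:
ys = np.zeros(7)
ys.ndim    # 1
ys.shape   # (7,)
ys.size    # 7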
So far we’ve only seen one way of creating a two-dimensional numpy array: pass in a list of lists to array(). You might not be surprised to hear that all the functions that we saw earlier being applied to one-dimensional arrays also work for two-dimensional ones. For example, we can use zeros() to create a two-dimensional array full of zeros. Instead of passing in one number, this time we will pass in a tuple, corresponding to the desired shape of the new array:
np.zeros((2,3))
Note that we have one pair of parentheses to denote the function zeros() and another pair of parentheses for the tuple containing the shape. Similarly, we can use ones() to create a two-dimensional array full of ones. This time, let’s do that for a square matrix:
np.ones((3,3))
Once again, we passed in a tuple for the shape. There also exists a very important matrix consisting of zeros and ones in a specific combination: we are referring to the identity, which is by definition a square matrix. Nicely enough, there exists a numpy function called identity(), which produces precisely what we want:
np.identity(3)
It’s easy to notice that identity() takes in a single argument, since it’s designed to produce square matrices. In contrast, since zeros() and ones() take in a whole shape tuple, they can also produce three-dimensional, four-dimensional, etc. arrays, though we won’t have any use for those. Note that all of these functions produce floats, by default.
Let’s turn back, for a second, to the case of passing in a shape parameter when creating a new array. You may have noticed that defining a list-of-lists in Python, such as our LL above, is a bit “clunky”: one needs double square brackets, as well as several commas, appropriately interspersed. As a matter of fact, as we’ll see in the following subsection, avoiding such unwieldy notation is one of the many benefits of using two-dimensional numpy arrays. By this point, it should come as no surprise that numpy provides us with the option of avoiding a list of lists in the first place. The way to do this is by starting from a one-dimensional array that is then re-shaped into a two-dimensional array via the use of the reshape() method:
B = np.array([1,4,9,7,2,8,5,6])
B.shape
B = B.reshape((2,4))
B.shape
B
First, B was a one-dimensional array (which we produced using a regular Python list). This is clearly shown in the value of the shape attribute. We then acted with the reshape() method on our one-dimensional array, thereby creating a two-dimensional array (stored back into the same variable, B). Again, the value of the shape attribute shows that now things are different.
Note that we didn’t even have to start from a Python list. Instead, the original A example we were discussing above can be re-written using arange() which, as you may recall, produces a one-dimensional array:
A = np.arange(11,23)
A = A.reshape((3,4))
A
This A started out as a one-dimensional array and was then re-shaped into a two-dimensional array by passing in the appropriate shape argument. Once you get more used to arrays you will feel comfortable carrying out both steps in one line:
A = np.arange(11,23).reshape(3,4)
A
Observe how conveniently we’ve obviated the multitude of square brackets and commas that were necessary in the definition of LL. (Note that we could have reshaped our array simply by changing the value of the shape tuple, instead of acting with reshape()).
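For completeness, here is a quick sketch of that alternative (the variable C is introduced only for this illustration):
C = np.arange(11,23)
C.shape = (3,4)   # assigning to the shape attribute reshapes in place
C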
It is now time to see one of the nicer features of using two-dimensional arrays in numpy: this happens to be one of the main selling points of numpy arrays, namely the intuitive way to access elements. Let’s start from our example of a list-of-lists:
LL = [[11,12,13,14], [15,16,17,18], [19,20,21,22]]
As you may recall, if you are interested in the element located in the 2nd row and 1st column, you do:
LL[2][1]
Recall that we are using Python's 0-indexing, so the rows are numbered 0th, 1st, 2nd, and analogously for the columns. (In regular English, the example above is referring to the third row and second column – we’ll use numbers, e.g. 2nd, when employing the 0-indexing convention). This double pair of brackets, separating the numbers from each other is quite cumbersome and also different from how matrix notation is usually done, e.g., $A_{ij}$ or $A_{i, j}$. Thus, it comes as a relief to see that numpy array elements can be indexed simply by using only one pair of square brackets and a comma-separated pair of numbers, for example:
A = np.arange(11,23).reshape(3,4)
A
A[2,1]
As a matter of fact, the cumbersome Python-list notation also works with arrays, but we won’t be employing it in what follows, opting instead for the “cleaner” alternative of two comma-separated values. Similarly, even though we can create a 2d array using a list-of-lists, we will typically opt for a 1d array followed by a call to reshape().
We now turn to slicing. We start from the simplest case possible, namely picking a specific row (by indexing) and then a slice from different columns. For example:
A[1,1:3]
Similarly, if we pick a specific row and use an everything-slice for the columns, we get that entire row:
A[1,:]
Using just the same logic, if we pick a specific column and use an everything-slice for the rows, we get that entire column:
A[:,2]
We could keep piling on examples like this, where we select a pair of rows, a pair of columns, etc.
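For instance, both of the following give back two-dimensional arrays:
A[:2,:]    # the first two rows
A[:,1:3]   # the 1st and 2nd columns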
To summarize: when we use two numbers to index (such as A[2,1]), we go from a two-dimensional array to a number. When we combine one number with a slice (such as A[:,2]), we go from a two-dimensional array to a one-dimensional array. When we use slices in both dimensions (such as A[::2,:]), we remain two-dimensional, getting collections of rows, columns, etc.
We can combine what we just learned about slicing two-dimensional arrays with what we already know about numpy broadcasting. First, pick a specific subset of our array and assign it to a new variable:
B = A[:2,1:]
B
We can now employ broadcasting of a single value as well as the everything-slice (twice) to set the elements of this new entity to zero:
B[:,:] = 0
B
At this point, we remember that everything-slices in numpy (while allowing us to nicely assign to each element, as above) do not create copies. As we saw for one-dimensional arrays, array slices are views on the original array. As a result, by modifying B we have actually also modified A:
A
As before, if this is undesired behavior then you should use numpy.copy():
A = np.arange(11,23).reshape(3,4)
A
B = np.copy(A[:2,1:])
B[:,:] = 0
B
A
As already noted, in chapter 4 we will typically make a copy of the input array inside a function, instead of modifying it “in place”. After you’re done reading the book this tutorial is accompanying, it will usually be a better idea to avoid copies altogether, as long as you’re careful about what you’re doing.
Just like for one-dimensional arrays, operations for two-dimensional arrays are interpreted elementwise. Such vectorized operations allow us to write very concise code that, e.g., adds together two square matrices:
A = np.arange(1,10).reshape(3,3)
B = np.arange(11,20).reshape(3,3)
A
B
A+B
This is convenient and expected behavior.
On the other hand, a simple multiplication is also carried out elementwise, i.e., each element in A is multiplied by the corresponding element in B:
A*B
This may be unexpected behavior, if you were really looking for a matrix multiplication. To repeat, array operations are always carried out elementwise, so a product of two two-dimensional arrays is simply $(\mathbf{A \odot B})_{ij} = A_{ij} B_{ij}$. (As an aside, in math this is known as the Hadamard product). The operation of matrix multiplication that you know from linear algebra follows from a different formula, namely $(\mathbf{AB})_{ij} = \sum_{k} A_{ik} B_{kj}$. This operation is conveniently encapsulated in numpy’s dot() function:
np.dot(A,B)
This does what we want, but the syntax is not as intuitive as one would have liked. From Python 3.5 and onward you can multiply two matrices using the $@$ infix operator, i.e.
A@B
which is obviously more convenient. As a matter of fact, $@$ or numpy.dot() can also handle one-dimensional arrays. A problem in chapter 1 investigates both solutions further.
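As a quick taste of the one-dimensional case, both options reproduce the scalar product we formed earlier via numpy.sum():
xs = np.arange(10,16)
ys = np.arange(20,26)
np.dot(xs,ys)   # 1705
xs@ys           # same thing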
Numpy contains several other important functions, with self-explanatory names. Perhaps the most commonly used is transpose():
np.transpose(A)
which can also be accessed as an array attribute:
A.T
Also useful is trace():
np.trace(A)
As you may have guessed, the latter function sums up the elements on the diagonal.
Based on what we’ve already introduced, we can see how easy it is to iterate over rows:
A = np.arange(1,10).reshape(3,3)
A
for row in A:
    print(row)
or columns:
for column in A.T:
    print(column)
Note that in both of these cases we don’t have double square brackets, since we are printing out a one-dimensional array each time. Iterating over columns will come in very handy in chapter 4 when we will be handling eigenvectors of a matrix.
In general, two-dimensional arrays behave very similarly to one-dimensional ones. That means that when combining a two-dimensional array with a scalar, we have broadcasting, for example:
A/2
Broadcasting is often used in more complex scenarios (e.g., to combine a two-dimensional array with a one-dimensional one), but we won’t employ such features below. Needless to say, all the numpy functions we mentioned earlier, such as numpy.sqrt() and numpy.exp(), also work on two-dimensional arrays, and the same holds for subtraction, multiplication, powers, etc. Intriguingly, functions like numpy.sum(), numpy.amin(), and numpy.amax() also take in an (optional) argument axis: axis=0 operates along the rows (producing one result per column), whereas axis=1 operates along the columns (producing one result per row). For example:
A = np.arange(1,10).reshape(3,3)
np.sum(A,axis=0)
Similarly, we can create our own user-defined functions that know how to handle an entire two-dimensional array/matrix at once. Given that these features are identical to what we encountered in the previous section, we assume they’re clear to the reader. That being said, there do exist several handy functions that are intuitively easier to grasp for the case of two-dimensional arrays, e.g., numpy.diag() to get the diagonal elements, or numpy.tril() to get the lower triangle of an array and numpy.triu() to get the upper triangle. Finally, another very helpful function is numpy.eye() which looks equivalent to numpy.identity() but, conveniently, also allows us to populate elements on the diagonals above or below the main diagonal. A few quick illustrations, using the A from above:
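np.diag(A)        # array([1, 5, 9]): the main diagonal
np.tril(A)        # lower triangle (entries above the diagonal zeroed out)
np.eye(5, k=-2)   # ones on the second diagonal below the main one
You should study the documentation to discover further functionality.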
A further point: the default storage-format for a numpy array in memory is row-major: as a beginner, you shouldn’t worry about this too much, but it means that rows are stored in order, one after the other. This is the same format used by the C programming language. (If you wish to use, instead, Fortran’s column-major format, presumably in order to interoperate with code in that language, you simply pass in an appropriate keyword argument when creating the numpy array). As long as you structure your code in the “natural” way, i.e., first looping through rows and then through columns, all should be well, so you can ignore this paragraph.
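For reference, the relevant keyword argument is order; a minimal sketch:
F = np.array([[11,12],[13,14]], order='F')   # column-major (Fortran-style) storage
F                                            # indexing and printing are unaffected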
Given the infrastructure that numpy contains with a view to making matrix manipulations easy, it is not surprising that it also contains functions that allow us to conveniently read/write arrays from/to a file. Starting from file input, we turn back to our original example from the Python tutorial:
%%writefile compl.in
x y
0.6 -6.2
1.6 9.3
1.8 16.
If we want to read all the numbers (ignoring the first, header, line of text) into a numpy array, we can simply use the numpy.loadtxt() function, passing in the filename:
A = np.loadtxt("compl.in", skiprows=1)
A
As you can see, with one line of code all the numbers in the file have been stored into a numpy array. Observe how truly simple this is: if there were no header line to be discarded, even the second argument skiprows wouldn’t have been necessary (its default value is 0), so all that we would have needed to pass in would have been the filename!
If you’re thinking that you don’t actually want all your data in one two-dimensional array, you can simply use what you learned about slicing in the previous section and split your array up. If you’d like to do that from the get-go, numpy.loadtxt() has the option unpack which allows you to do just that:
xs, ys = np.loadtxt("compl.in", skiprows=1, unpack=True)
xs
ys
You should compare this to the manual operation we carried out in fileread.py.
The function numpy.loadtxt() has several other options, which allow you to ignore comments, use specific columns, etc. If you need even more options when reading a file (e.g., there may be missing values), you should look up numpy.genfromtxt().
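As one illustration of those options (comments and usecols are genuine numpy.loadtxt() parameters), here we read only the first column, still skipping the header:
xs = np.loadtxt("compl.in", skiprows=1, usecols=(0,))
xs   # array([0.6, 1.6, 1.8])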
Turning to file output: this is just as straightforward, given the existence of the numpy.savetxt() function, which expects as its first argument (as you may have guessed) the filename where you want the output to be written. Equally straightforwardly, the second expected argument is the numpy array(s) you want written out. To make this tangible, let us rewrite our example filewrite.py from the Python tutorial using numpy:
xs = np.linspace(0,1.8,10)
ys = 4*xs**3 - 7
np.savetxt("compl2.out", (xs,ys))
Note how easily we generated the xs and how straightforward it was to act on each of their elements to generate the ys: this is the power of numpy arrays in action. Then, we simply called numpy.savetxt() passing in the desired filename and a tuple containing the two arrays to be written out.
You may have noticed that the above example writes out the two arrays in turn. If, instead, you wanted the output to be like that of filewrite.py, namely a table of $x$ and $y$ values, you would use numpy.column_stack(), which does what its name says, namely it stacks together two one-dimensional arrays to produce a two-dimensional array:
xs = np.linspace(0,1.8,10)
ys = 4*xs**3 - 7
A = np.column_stack((xs,ys))
np.savetxt("compl3.out", A)
One of the problems asks you to avoid the use of numpy.column_stack() and employ simple array slicing to produce A. As usual, numpy.savetxt() takes in several other optional parameters, which allow you to customize your output.
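For instance, fmt and header are two such optional parameters; the following sketch writes the same table with six digits after the decimal point and a header line of its own:
np.savetxt("compl3.out", A, fmt="%.6f", header="x y")   # numpy prepends '# ' to the header by default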
Rewrite the code sumofints.py, to use numpy arrays instead of core Python functionality.
Rewrite the code in the two file input/output examples from the last section so that they can produce the desired input/output with a minimum of external functions. In other words, first, produce the one-dimensional arrays xs and ys while reading the file as follows:
A = np.loadtxt("compl.in", skiprows=1)
Second, write out to a file the two-dimensional array A as follows:
np.savetxt("compl4.out", A)
without employing numpy.column_stack().
Alex Gezerlis -- http://www.numphyspy.org