To import functionality from a library:
import math
print(math.sqrt)
import math as m # inline rename
print(m.sqrt)
from math import sqrt
print(sqrt)
You can also do:
from math import * # import all (usually not a good idea)
from math import sqrt as square_root # rename inline
Packages can also be nested, with dots separating the levels. For example, the os.path.join function can be imported directly:
import os.path
join = os.path.join
or
from os.path import join
For the moment, we won't talk about importing your own files.
Warning: nested packages may or may not get imported if you only import the main package - we'll see this later.
A few conventions:
- Try to have all your imports at the top - it's easier to quickly read and parse the dependencies, and the time spent importing libraries happens at the beginning.
- Choose the method that's easiest to read, or commonly used (everybody renames numpy to np, for example).
- Don't use import all - that makes it hard to figure out where things come from.
And a caveat:
- Once imported, rerunning an import statement reuses the already imported library. Fast, but it means you can't pick up changes in a file you are editing without restarting (the kernel, in Jupyter)!
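A quick way to see this caching: Python keeps every imported module in the sys.modules dictionary, and a repeated import just looks the module up there (a minimal check, assuming math was imported above):
import sys
"math" in sys.modules  # True - a repeated "import math" reuses this cached entry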
That's enough about imports for now.
Floating point numbers in IEEE 754
A number in base 10 could be written like this:
-1.234 E -02
where -1.234 is the significand and -02 is the exponent.
A floating point number is actually stored as a signed significand and signed exponent in base 2.
Special values for floating point numbers include -0.0, -inf, +inf, and nan.
There are technically quiet nans and signaling nans in IEEE 754, but I'm not aware of any place in Python where they are distinguished as such - both are simply called nan.
float("inf") == float("inf")
Common floating point types (IEEE 754 standard):
| Type             | Significand bits | Exponent bits | Approx decimal digits |
|------------------|------------------|---------------|------------------------|
| float16 (half)   | 11               | 5             | 3-4                    |
| float32 (single) | 24               | 8             | 7-8                    |
| float64 (double) | 53               | 11            | 15-16                  |
print(f"16 bits: {2**16:,}\n32 bits: {2**32:,}\n64 bits: {2**64:,}")
Possibly odd:
2.0 ** 53 # + 1
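The point of the hint in the comment: above 2**53 a float64 can no longer represent every integer, so adding 1 can be lost entirely (a quick check):
print(2.0 ** 53 == 2.0 ** 53 + 1)  # True - 2**53 + 1 rounds back down to 2**53
print(2.0 ** 53 + 2)               # 2 is still representable: the spacing between floats here is 2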
What value should this produce?
1 - 1 / 3 - 1 / 3 - 1 / 3
There are also some base 10 decimal numbers, like 0.3, that can't be stored in a finite number of digits in base 2 (any fraction whose denominator, in lowest terms, contains a factor of 5). Python is smart and tries to provide a "nice" representation for you, but you can force it to show you more digits:
print(f"1/2: {.5} {.5:.30f}")
print(f"1/5: {.2} {.2:.30f}")
print(f"1/10: {.1} {.1:.30f}")
print(f"3/30: {.3} {.3:.30f}")
Python does provide conceptually "better" but much slower alternatives for special cases:
from fractions import Fraction
res = 1 - Fraction(1, 3) - Fraction(1, 3) - Fraction(1, 3)
res
from decimal import Decimal
one_third = Decimal(1) / Decimal(3)
1 - one_third - one_third - one_third
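One Decimal gotcha worth knowing: constructing a Decimal from a float copies the float's binary rounding error, while constructing it from a string keeps the exact decimal value (a small illustration):
from decimal import Decimal
print(Decimal(0.1))    # carries the float64 error: 0.1000000000000000055511151231...
print(Decimal("0.1"))  # exactly 0.1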
Python tries to be as clever as possible in the repr and str of numbers, picking the shortest representation that still round-trips to the same value:
1 / 3
We can get around that several ways:
format(1 / 3, ".17")
Decimal(1 / 3)
Let's take a look at two different forms of the quadratic formula (mathematically equivalent):
$$ x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4 a c}}{2 a} $$
and
$$ x_{1,2}^{'} = \frac{-2 c}{b \pm \sqrt{b^2 - 4 a c}}. $$
import math
def roots(a, b, c):
    # Discriminant term sqrt(b^2 - 4ac)
    sbac = math.sqrt(b ** 2 - 4 * a * c)
    # Standard form
    x1 = (-b + sbac) / (2 * a)
    x2 = (-b - sbac) / (2 * a)
    # Alternative (rationalized) form
    x1p = -2 * c / (b + sbac)
    x2p = -2 * c / (b - sbac)
    return x1, x1p, x2, x2p
roots(1, 1, 1e-1)
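The difference between the two forms shows up when b**2 is much larger than 4*a*c, so that -b + sqrt(...) cancels almost completely. These coefficients are just an illustrative choice:
# x1 loses most of its digits to cancellation while x1p stays accurate (and vice versa for x2/x2p)
roots(1, 1e8, 1)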
Let's try the harmonic series 1/n. The full series diverges, but the partial sum for a fixed number of terms N is perfectly well defined. Mathematically, the order we sum in does not matter! But...
$$ S^\mathrm{up} = \sum_{n=1}^{N} \frac{1}{n} $$
$$ S^\mathrm{down} = \sum_{n=N}^{1} \frac{1}{n} $$
# You can use np.floatXX to try different types of floats
import numpy as np
sum(1 / x for x in range(1, 1_000_000))
for float_type in [np.float16, np.float32, np.float64]:
    print(float_type, sum(float_type(1 / x) for x in reversed(range(1, 1_000_000))))
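To see that the summation order itself matters, compare the same float type summed forward and in reverse; adding the small terms first (reversed) loses less precision (a quick comparison, assuming NumPy is imported as above):
up = sum(np.float32(1 / x) for x in range(1, 1_000_000))
down = sum(np.float32(1 / x) for x in reversed(range(1, 1_000_000)))
print(up, down)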
Aside: If you leave off the [] in a comprehension, or use (), you get a "generator expression" (also called a generator comprehension) - meaning it only runs as you iterate over it! The unfortunate side effect is that it can only be iterated over once. Python, especially Python 3, is full of lazy iterables like these - including reversed() and range(), for example. You can use list() or tuple() to convert.
This means that the above expression never builds the full sequence in memory - it always does one computation at a time.
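A consequence of the one-pass behavior, shown with a throwaway generator:
gen = (x ** 2 for x in range(3))
print(list(gen))  # [0, 1, 4]
print(list(gen))  # [] - the generator is already exhausted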
import matplotlib.pyplot as plt
import numpy as np
Example: Approximating the sin function
Now let's look at the other main type of error - approximation error. Let's attempt to plot approximations of the sin function. We'll start by implementing the apsin function with the truncated Taylor series:
$$ \sin x \approx \sum_{n=1}^{N} \frac{(-1)^{n-1}\, x^{2n-1}}{(2n-1)!} $$
def apsin(x, N):
    # Start with the n = 1 term: x
    t = s = x
    for n in range(2, N):
        # Each term is the previous one times -x^2 / ((2n - 1)(2n - 2))
        t = -t * x ** 2 / ((2 * n - 1) * (2 * n - 2))
        s = s + t
    return s
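A quick sanity check at a single point before plotting (N=10 is an arbitrary choice):
print(apsin(1.0, 10), np.sin(1.0))  # should agree to many digits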
# Make a set of x values (np.linspace defaults to 50 values)
xs = np.linspace(0, 2 * np.pi)
xs
# Make a new figure
plt.figure(figsize=(12, 5))
# Plot each value
for N in range(2, 7):
    plt.plot(xs, apsin(xs, N), label=str(N))
# Plot true values
plt.plot(xs, np.sin(xs), "k--", label="sin(x)")
# Keep the y limits somewhat reasonable
plt.ylim(-1.1, 1.1)
# Show a legend with existing labels
plt.legend()