Profiling Code, Reading and Writing Files

Week 10 Day 1: (Pro)file

Objectives

  • Cover profiling (extension of last lecture)
  • Cover reading and writing of files
  • Start the basics of ODEs

Profiling

A quick extension of last week's lecture: remember, we said you should only optimize the slow parts of your program. How do you tell which parts are slow? Profile!

There are two types of profilers:

  1. Deterministic profilers: These track every function call (or even every line), which slows the program down but gives exact counts.
  2. Sampling profilers: These "sample" the running program every so often, adding little overhead but giving only statistical estimates.

The main profilers for Python are the built-in cProfile and profile modules (the first being a faster version of the second). Since Python is already interpreted and slow, most profilers for Python are deterministic profilers.

The following example uses the built-in profiler directly, as well as line_profiler, installed from either conda (conda install line_profiler in your environment) or pip (pip install --user line-profiler). line_profiler is a powerful line-by-line profiler with a convenient IPython extension.

import numpy as np
from scipy.stats import norm
def sampler(
    data,
    samples=4,
    mu_init=0.5,
    proposal_width=0.5,
    plot=False,
    mu_prior_mu=0,
    mu_prior_sd=1.0,
):
    mu_current = mu_init
    posterior = [mu_current]
    for i in range(samples):
        # suggest new position
        mu_proposal = norm(mu_current, proposal_width).rvs()

        # Compute likelihood by multiplying probabilities of each data point
        likelihood_current = norm(mu_current, 1).pdf(data).prod()
        likelihood_proposal = norm(mu_proposal, 1).pdf(data).prod()

        # Compute prior probability of current and proposed mu
        prior_current = norm(mu_prior_mu, mu_prior_sd).pdf(mu_current)
        prior_proposal = norm(mu_prior_mu, mu_prior_sd).pdf(mu_proposal)

        p_current = likelihood_current * prior_current
        p_proposal = likelihood_proposal * prior_proposal

        # Accept proposal?
        p_accept = p_proposal / p_current

        # Usually would include prior probability, which we neglect here for simplicity
        accept = np.random.rand() < p_accept

        if plot:
            plot_proposal(
                mu_current,
                mu_proposal,
                mu_prior_mu,
                mu_prior_sd,
                data,
                accept,
                posterior,
                i,
            )

        if accept:
            # Update position
            mu_current = mu_proposal

        posterior.append(mu_current)

    return posterior
np.random.seed(123)
data = np.random.randn(20)
import cProfile
cProfile.run("posterior = sampler(data, samples=1000, mu_init=1.)")
         7334008 function calls (7282008 primitive calls) in 4.889 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     2000    0.002    0.000    0.027    0.000 <__array_function__ internals>:2(all)
     4000    0.004    0.000    0.061    0.000 <__array_function__ internals>:2(any)
     4000    0.004    0.000    0.042    0.000 <__array_function__ internals>:2(atleast_1d)
     1000    0.001    0.000    0.020    0.000 <__array_function__ internals>:2(broadcast_arrays)
     8000    0.006    0.000    0.168    0.000 <__array_function__ internals>:2(extract)
     8000    0.005    0.000    0.028    0.000 <__array_function__ internals>:2(nonzero)
     4000    0.003    0.000    0.019    0.000 <__array_function__ internals>:2(place)
     4000    0.004    0.000    0.018    0.000 <__array_function__ internals>:2(putmask)
    16000    0.011    0.000    0.066    0.000 <__array_function__ internals>:2(ravel)
     4000    0.005    0.000    0.013    0.000 <__array_function__ internals>:2(shape)
     8000    0.005    0.000    0.038    0.000 <__array_function__ internals>:2(take)
        1    0.016    0.016    4.897    4.897 <ipython-input-2-17c343ae0e58>:1(sampler)
        1    0.000    0.000    4.897    4.897 <string>:1(<module>)
    15000    0.008    0.000    0.013    0.000 <string>:1(__new__)
        1    0.000    0.000    0.000    0.000 <string>:2(<module>)
        2    0.000    0.000    0.000    0.000 <string>:2(_parse_args)
        1    0.000    0.000    0.000    0.000 <string>:5(_parse_args_rvs)
    16000    0.006    0.000    0.031    0.000 _asarray.py:16(asarray)
    24000    0.007    0.000    0.019    0.000 _asarray.py:88(asanyarray)
     4000    0.038    0.000    0.038    0.000 _continuous_distns.py:179(_norm_pdf)
     1000    0.001    0.000    0.005    0.000 _continuous_distns.py:234(_rvs)
     4000    0.003    0.000    0.041    0.000 _continuous_distns.py:237(_pdf)
     5000    0.119    0.000    3.976    0.001 _distn_infrastructure.py:1572(__init__)
     5000    0.007    0.000    0.009    0.000 _distn_infrastructure.py:1637(_updated_ctor_param)
     4000    0.186    0.000    0.732    0.000 _distn_infrastructure.py:1714(pdf)
     5000    0.035    0.000    4.025    0.001 _distn_infrastructure.py:423(__init__)
    15000    0.007    0.000    0.007    0.000 _distn_infrastructure.py:44(instancemethod)
     4000    0.010    0.000    0.742    0.000 _distn_infrastructure.py:442(pdf)
     1000    0.003    0.000    0.088    0.000 _distn_infrastructure.py:460(rvs)
     4000    0.012    0.000    0.253    0.000 _distn_infrastructure.py:520(argsreduce)
     4000    0.031    0.000    0.199    0.000 _distn_infrastructure.py:545(<listcomp>)
     5000    0.016    0.000    0.297    0.000 _distn_infrastructure.py:587(__init__)
     5000    0.057    0.000    1.005    0.000 _distn_infrastructure.py:622(_construct_argparser)
     5000    0.054    0.000    2.509    0.001 _distn_infrastructure.py:702(_construct_doc)
     5000    0.001    0.000    0.001    0.000 _distn_infrastructure.py:710(<genexpr>)
     5000    0.006    0.000    4.031    0.001 _distn_infrastructure.py:753(freeze)
     5000    0.004    0.000    4.036    0.001 _distn_infrastructure.py:770(__call__)
     1000    0.005    0.000    0.028    0.000 _distn_infrastructure.py:790(_argcheck_rvs)
     2000    0.001    0.000    0.001    0.000 _distn_infrastructure.py:804(squeeze_left)
     1000    0.001    0.000    0.002    0.000 _distn_infrastructure.py:820(<listcomp>)
     1000    0.000    0.000    0.000    0.000 _distn_infrastructure.py:849(<listcomp>)
    10000    0.003    0.000    0.003    0.000 _distn_infrastructure.py:864(_argcheck)
     9000    0.003    0.000    0.003    0.000 _distn_infrastructure.py:876(_get_support)
     4000    0.018    0.000    0.019    0.000 _distn_infrastructure.py:897(_support_mask)
     1000    0.022    0.000    0.084    0.000 _distn_infrastructure.py:935(rvs)
     2000    0.001    0.000    0.011    0.000 _methods.py:40(_prod)
     2000    0.001    0.000    0.013    0.000 _methods.py:44(_any)
     2000    0.001    0.000    0.009    0.000 _methods.py:47(_all)
     5000    0.004    0.000    0.004    0.000 _util.py:174(check_random_state)
    15000    0.078    0.000    0.707    0.000 _util.py:277(getargspec_no_self)
    15000    0.012    0.000    0.014    0.000 _util.py:300(<listcomp>)
    15000    0.007    0.000    0.008    0.000 _util.py:304(<listcomp>)
    15000    0.006    0.000    0.007    0.000 _util.py:309(<listcomp>)
    15000    0.009    0.000    0.012    0.000 _util.py:314(<listcomp>)
    10000    0.815    0.000    2.253    0.000 doccer.py:12(docformat)
    10000    0.280    0.000    0.505    0.000 doccer.py:179(indentcount_lines)
    25000    0.019    0.000    0.026    0.000 enum.py:284(__call__)
    25000    0.007    0.000    0.007    0.000 enum.py:526(__new__)
    16000    0.002    0.000    0.002    0.000 fromnumeric.py:1689(_ravel_dispatcher)
    16000    0.016    0.000    0.045    0.000 fromnumeric.py:1693(ravel)
     8000    0.001    0.000    0.001    0.000 fromnumeric.py:1800(_nonzero_dispatcher)
     8000    0.004    0.000    0.019    0.000 fromnumeric.py:1804(nonzero)
     4000    0.001    0.000    0.001    0.000 fromnumeric.py:1899(_shape_dispatcher)
     4000    0.002    0.000    0.002    0.000 fromnumeric.py:1903(shape)
     4000    0.001    0.000    0.001    0.000 fromnumeric.py:2232(_any_dispatcher)
     4000    0.006    0.000    0.053    0.000 fromnumeric.py:2236(any)
     2000    0.000    0.000    0.000    0.000 fromnumeric.py:2320(_all_dispatcher)
     2000    0.002    0.000    0.022    0.000 fromnumeric.py:2324(all)
    16000    0.010    0.000    0.034    0.000 fromnumeric.py:55(_wrapfunc)
     6000    0.017    0.000    0.066    0.000 fromnumeric.py:73(_wrapreduction)
     6000    0.003    0.000    0.003    0.000 fromnumeric.py:74(<dictcomp>)
     8000    0.001    0.000    0.001    0.000 fromnumeric.py:93(_take_dispatcher)
     8000    0.007    0.000    0.027    0.000 fromnumeric.py:97(take)
     8000    0.002    0.000    0.002    0.000 function_base.py:1624(_extract_dispatcher)
     8000    0.022    0.000    0.154    0.000 function_base.py:1628(extract)
     4000    0.001    0.000    0.001    0.000 function_base.py:1680(_place_dispatcher)
     4000    0.003    0.000    0.012    0.000 function_base.py:1684(place)
    20000    0.044    0.000    0.046    0.000 function_base.py:2031(__init__)
    30000    0.009    0.000    0.012    0.000 inspect.py:158(isfunction)
    15000    0.046    0.000    0.136    0.000 inspect.py:1799(_signature_bound_method)
    15000    0.098    0.000    0.251    0.000 inspect.py:2117(_signature_from_function)
30000/15000    0.090    0.000    0.531    0.000 inspect.py:2198(_signature_from_callable)
    25000    0.043    0.000    0.076    0.000 inspect.py:2467(__init__)
    45000    0.006    0.000    0.006    0.000 inspect.py:2517(name)
    20000    0.002    0.000    0.002    0.000 inspect.py:2521(default)
    80000    0.009    0.000    0.009    0.000 inspect.py:2529(kind)
    30000    0.102    0.000    0.124    0.000 inspect.py:2750(__init__)
    40000    0.014    0.000    0.018    0.000 inspect.py:2799(<genexpr>)
    15000    0.019    0.000    0.550    0.000 inspect.py:2829(from_callable)
    75000    0.008    0.000    0.008    0.000 inspect.py:2835(parameters)
    15000    0.016    0.000    0.079    0.000 inspect.py:2843(replace)
    15000    0.010    0.000    0.560    0.000 inspect.py:3081(signature)
    15000    0.022    0.000    0.036    0.000 inspect.py:484(unwrap)
    15000    0.005    0.000    0.008    0.000 inspect.py:504(_is_wrapper)
     4000    0.001    0.000    0.001    0.000 multiarray.py:1078(putmask)
     8000    0.051    0.000    0.068    0.000 numerictypes.py:578(_can_coerce_all)
    56000    0.012    0.000    0.012    0.000 numerictypes.py:587(<listcomp>)
     4000    0.010    0.000    0.084    0.000 numerictypes.py:602(find_common_type)
     4000    0.005    0.000    0.005    0.000 numerictypes.py:654(<listcomp>)
     4000    0.001    0.000    0.001    0.000 numerictypes.py:655(<listcomp>)
     4000    0.001    0.000    0.001    0.000 shape_base.py:21(_atleast_1d_dispatcher)
     4000    0.014    0.000    0.034    0.000 shape_base.py:25(atleast_1d)
     1000    0.004    0.000    0.004    0.000 stride_tricks.py:185(_broadcast_shape)
     1000    0.000    0.000    0.000    0.000 stride_tricks.py:202(_broadcast_arrays_dispatcher)
     1000    0.004    0.000    0.016    0.000 stride_tricks.py:206(broadcast_arrays)
     1000    0.001    0.000    0.006    0.000 stride_tricks.py:262(<listcomp>)
     3000    0.001    0.000    0.001    0.000 stride_tricks.py:266(<genexpr>)
    15000    0.006    0.000    0.006    0.000 {built-in method __new__ of type object at 0x55e26f98c240}
     2000    0.001    0.000    0.002    0.000 {built-in method builtins.all}
    30000    0.004    0.000    0.004    0.000 {built-in method builtins.callable}
   5001/1    0.500    0.000    4.897    4.897 {built-in method builtins.exec}
    20000    0.005    0.000    0.005    0.000 {built-in method builtins.getattr}
    20000    0.004    0.000    0.004    0.000 {built-in method builtins.hasattr}
    15000    0.003    0.000    0.003    0.000 {built-in method builtins.id}
   144000    0.020    0.000    0.020    0.000 {built-in method builtins.isinstance}
  1045000    0.071    0.000    0.071    0.000 {built-in method builtins.len}
   480000    0.091    0.000    0.091    0.000 {built-in method builtins.min}
    15000    0.004    0.000    0.004    0.000 {built-in method builtins.setattr}
    42000    0.041    0.000    0.041    0.000 {built-in method numpy.array}
     4000    0.008    0.000    0.008    0.000 {built-in method numpy.core._multiarray_umath._insert}
63000/31000    0.057    0.000    0.333    0.000 {built-in method numpy.core._multiarray_umath.implement_array_function}
     4000    0.004    0.000    0.004    0.000 {built-in method numpy.zeros}
    15000    0.003    0.000    0.003    0.000 {built-in method sys.getrecursionlimit}
     2000    0.003    0.000    0.012    0.000 {method 'all' of 'numpy.generic' objects}
     2000    0.005    0.000    0.018    0.000 {method 'any' of 'numpy.generic' objects}
  2464000    0.177    0.000    0.177    0.000 {method 'append' of 'list' objects}
    11000    0.006    0.000    0.006    0.000 {method 'copy' of 'dict' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
   320000    0.356    0.000    0.356    0.000 {method 'expandtabs' of 'str' objects}
    46000    0.008    0.000    0.008    0.000 {method 'get' of 'dict' objects}
    25000    0.005    0.000    0.005    0.000 {method 'isidentifier' of 'str' objects}
    16000    0.003    0.000    0.003    0.000 {method 'items' of 'dict' objects}
   275000    0.089    0.000    0.090    0.000 {method 'join' of 'str' objects}
   655000    0.072    0.000    0.072    0.000 {method 'lstrip' of 'str' objects}
     8000    0.007    0.000    0.007    0.000 {method 'nonzero' of 'numpy.ndarray' objects}
     3000    0.001    0.000    0.001    0.000 {method 'pop' of 'dict' objects}
     2000    0.002    0.000    0.013    0.000 {method 'prod' of 'numpy.ndarray' objects}
     1000    0.003    0.000    0.003    0.000 {method 'rand' of 'numpy.random.mtrand.RandomState' objects}
    16000    0.012    0.000    0.012    0.000 {method 'ravel' of 'numpy.ndarray' objects}
     8000    0.043    0.000    0.043    0.000 {method 'reduce' of 'numpy.ufunc' objects}
    30000    0.191    0.000    0.191    0.000 {method 'replace' of 'str' objects}
     6000    0.012    0.000    0.012    0.000 {method 'reshape' of 'numpy.ndarray' objects}
   320000    0.320    0.000    0.320    0.000 {method 'splitlines' of 'str' objects}
     1000    0.004    0.000    0.004    0.000 {method 'standard_normal' of 'numpy.random.mtrand.RandomState' objects}
     8000    0.014    0.000    0.014    0.000 {method 'take' of 'numpy.ndarray' objects}
     1000    0.001    0.000    0.001    0.000 {method 'update' of 'dict' objects}
    75000    0.014    0.000    0.014    0.000 {method 'values' of 'mappingproxy' objects}


Reading that last page is a bit tricky - we see the "insides" of all the library functions being called. Let's try line_profiler to select just the one function we care about, and see it line by line.

Like most external packages (matplotlib, with its built-in %matplotlib magic, being the exception), we need to load line_profiler as an IPython extension first:

%load_ext line_profiler

Then, we can use the %lprun magic with the -f flag to select a function to profile:

%lprun -f sampler posterior = sampler(data, samples=1000, mu_init=1.)

Much better! I can easily see what's wrong now. This allows you to target optimizations; often minor changes, like np.stack instead of np.array, may give you large speedups. Parts that are not performance critical can remain clear and easy to read/maintain.
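
As a quick illustration, here is a minimal sketch of timing that kind of change yourself (the chunks list is just a made-up example; actual timings depend on your machine and NumPy version):

import timeit
import numpy as np

chunks = [np.random.rand(100) for _ in range(1000)]

# Two ways to combine a list of equal-length 1D arrays into a 2D array;
# time them instead of guessing which is faster.
print(timeit.timeit(lambda: np.array(chunks), number=100))
print(timeit.timeit(lambda: np.stack(chunks), number=100))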

Real life example:

  • I had 128 DataFrames of around a million events each, and I wanted to make 128 2D histograms.
  • The custom histogram library Physt was taking 3 seconds per histogram.
  • A native NumPy histogram (then fed into Physt) took 0.9 seconds per histogram.
  • A custom Numba histogram function (then fed into Physt) took 0.1 seconds per histogram.

At this point, a total of 10-15 seconds for the whole procedure was fine. Keeping Physt histograms as the output means I can use its nice OO design to merge and rebin histograms later, by the way.

Files

Let's look at reading in and out data!

Basic input/output

As a reminder, this is input output of text or data files in pure Python:

with open("tmpfile.txt", "w") as f:
    f.write("This is one line\n")
    print("Second line", file=f)
# File gets closed here

with open("tmpfile.txt") as f:
    for l in f:
        print(l, end="")
This is one line
Second line

You can iterate over a file to get lines, use .readlines to get a list of lines, or use .read to read in part or all of the file. The mode flags select text vs. binary 'b', read 'r', write 'w', or append 'a'.
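
For example, a minimal sketch of appending to the file from above and reading it back as a list of lines:

# Append ('a') another line to the existing file
with open("tmpfile.txt", "a") as f:
    f.write("Third line\n")

# Read the whole file back as a list of lines (newlines included)
with open("tmpfile.txt") as f:
    lines = f.readlines()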

The f object is a file buffer. Quite a few functions and libraries in Python take a buffer - usually they accept either a file name or an open buffer. You can make a buffer that is attached to memory instead of a file with io.StringIO.
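
Here is a minimal sketch of using an in-memory buffer where a file would normally go:

import io

# A text "file" that lives in memory instead of on disk
buffer = io.StringIO()
print("This never touches the disk", file=buffer)

buffer.seek(0)  # rewind before reading
text = buffer.read()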

Serializing any objects

Python has a protocol for storing and recovering almost any object, called "pickling". This is general purpose, but it is often not ideal for large or long-term storage, or for communicating data - pickles are tied to the Python version to some extent, the class code needs to be similar or the same when you load them back, and there is no compression. But it can be handy in a pinch. You can use dump and load to work with files, or dumps and loads to input/output directly to a string instead:

import pickle
pickle_str = pickle.dumps("some object")
print(pickle_str)
b'\x80\x03X\x0b\x00\x00\x00some objectq\x00.'
pickle.loads(pickle_str)
'some object'
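
The file-based dump and load work the same way; a minimal sketch (the file name and object are arbitrary):

# Pickle files are binary, so open with 'wb'/'rb'
with open("tmp.pkl", "wb") as f:
    pickle.dump({"mu": 0.5, "samples": 1000}, f)

with open("tmp.pkl", "rb") as f:
    restored = pickle.load(f)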

Use case: configuration

Let's look at better ways to do IO for specific cases. Let's look at a common one: storing some "configuration" like data (or any small set of data). As long as the data is small, it's nice to leave it human readable format, in a text file. Here are some popular formats:

  1. XML: Famous - a close relative of HTML. Very verbose and ugly, and often used with a custom schema.
  2. JSON: Popular. A little unfriendly for writing, but clean and simple. A subset of JavaScript makes it web-friendly. Great libraries available.
  3. INI: Not well defined, and very limited. Very easy for a human to author, though.
  4. YAML: "YAML Ain't Markup Language" (originally "Yet Another Markup Language"), popular in some areas. Lots of weird corner cases in the syntax, though.
  5. TOML: A simpler take on the same idea as YAML. Gaining some use in Python as of late.

Of these, the best standard library support in Python goes to JSON, so we'll look at that one. It's the best in the list for (small amounts of) general data. For configuration, it's a tossup.

The json module has the same dump/load and dumps/loads methods as pickle.

import json
structure_to_save = ["list", "items", {"dictionary": "Nested", "number": 2}]

print(json.dumps(structure_to_save, indent=True))
[
 "list",
 "items",
 {
  "dictionary": "Nested",
  "number": 2
 }
]
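
A round trip through a file works just like pickle; a minimal sketch (the file name is arbitrary):

# JSON files are plain text, so the normal text mode is fine
with open("tmp.json", "w") as f:
    json.dump(structure_to_save, f, indent=True)

with open("tmp.json") as f:
    assert json.load(f) == structure_to_save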

Larger data storage

Numpy

Let's assume we have data in Numpy. We can use one of several Numpy methods to store the data; we can even compress it.

a = np.random.rand(1000)

The format is a simple binary one-array-per-file format; you can use savez to save several arrays into one uncompressed zip file, or savez_compressed to save them into a compressed zip file:

np.savez_compressed("tmp.npz", a=a)
with np.load("tmp.npz") as f:
    print(f["a"].shape)
(1000,)
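
For a single array, the plain .npy format works the same way with save and load; a minimal sketch:

# One array per .npy file
np.save("tmp.npy", a)
b = np.load("tmp.npy")
assert np.all(a == b)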

While simple, this is still Numpy specific, and is not ideal for cross-language data storage.

HDF5

This is a standard format for storing binary structured data with optional compression. It has several nice features, like groups and attributes, that make it very powerful. You need an external library, but Anaconda comes with one by default - technically with two: h5py and PyTables. Since I also use HDF5 from C++, I like that h5py looks more like other HDF5 libraries, so I'll cover that one. The other is available with import tables.

import h5py
with h5py.File("tmp.h5", "w") as f:
    f["a"] = a

with h5py.File("tmp.h5", "r") as f:
    b = f["a"][()]
assert np.all(a == b)

This is a common idiom in modern IO libraries; you treat the file like a dictionary. There are explicit methods if you need to do something fancier (like turn on compression for a dataset). HDF5 has tools to allow you to keep a dataset on file instead of reading it entirely to memory (but I've not had to use those).
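
For example, a minimal sketch of the explicit method with compression turned on (the file name is arbitrary):

# create_dataset exposes options that the dictionary-style assignment hides
with h5py.File("tmp_compressed.h5", "w") as f:
    f.create_dataset("a", data=a, compression="gzip")

with h5py.File("tmp_compressed.h5", "r") as f:
    assert np.all(f["a"][()] == a)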

Also, there are great non-Python tools for HDF5 available, like the HDFView graphical file viewer.

Note for HEP physicists: the ROOT file format has a few features that HDF5 does not, like incremental saving. Other than small differences like that, though, the formats are similar - except that HDF5 is available almost everywhere, and even where it isn't, it takes something like 5 minutes to build instead of several hours...


Pandas

Pandas has very powerful connectors to lots of different formats, including CSV, JSON, Excel, and HDF5. If your data is already a table, just use Pandas!

import pandas as pd
dates = pd.date_range("20130101", periods=6)
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list("ABCD"))
df
                   A         B         C         D
2013-01-01 -0.455285  0.130549 -2.285845  2.360208
2013-01-02  0.259258  0.256923  0.780788  0.561676
2013-01-03  1.437854  1.700878 -0.816026 -1.941486
2013-01-04 -1.696014  2.202473 -1.077163  1.505113
2013-01-05 -1.066637  1.296895  1.286712  0.233670
2013-01-06 -0.637027  1.150314 -0.552488  1.561838
df.to_hdf("tmp.h5", "df")
pd.read_hdf("tmp.h5", "df")
                   A         B         C         D
2013-01-01 -0.455285  0.130549 -2.285845  2.360208
2013-01-02  0.259258  0.256923  0.780788  0.561676
2013-01-03  1.437854  1.700878 -0.816026 -1.941486
2013-01-04 -1.696014  2.202473 -1.077163  1.505113
2013-01-05 -1.066637  1.296895  1.286712  0.233670
2013-01-06 -0.637027  1.150314 -0.552488  1.561838
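
The other connectors look much the same; for example, a minimal sketch of a CSV round trip (index_col and parse_dates just restore the date index):

df.to_csv("tmp.csv")
df2 = pd.read_csv("tmp.csv", index_col=0, parse_dates=True)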

Bonus: Uproot

In HEP, we have a very commonly used and powerful format: ROOT. But getting ROOT to compile is a long (2+ hour) ordeal, and I have yet to get it to work with Anaconda Python on macOS.

An exciting new library, "uproot" lets you read ROOT files in pure Python without ROOT. The original name was to be micro-root, or $\mu$root, but it became uproot. You've already seen a bit of uproot in the previous lectures.
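
A minimal sketch of reading a file with uproot ("myfile.root", "events", and "pt" are hypothetical file, tree, and branch names):

import uproot

f = uproot.open("myfile.root")  # open the ROOT file without ROOT
tree = f["events"]              # look up a TTree by name
arrays = tree.arrays(["pt"])    # read a branch into an array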

Writing files is planned, but still in progress.