
yappi - Yet Another Python Profiler

yappi stands for "yet another python profiler". It is a Python profiler written in C that supports profiling of multi-threaded, asyncio, and gevent code. Because it is implemented in C, it runs quite fast compared to profilers written in pure Python. yappi provides a flexible API that lets us profile particular parts of code by starting and stopping the profiler. It reports results per function, not line by line like line_profiler and a few other line-based profilers. yappi can also format profiling output in different formats such as callgrind and pstat. It even lets us decide whether we want to measure wall time (the total elapsed time from the first statement to the last) or CPU time (the time during which the code actually used the CPU). yappi stores profiling output in special stat objects that let us filter and sort the results. As a part of this tutorial, we'll explain with simple examples how we can profile our Python code using yappi, and we'll cover the majority of the yappi API in the process.
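
yappi is not part of the Python standard library, so it needs to be installed before trying the examples below. Assuming pip is available, it can be installed from PyPI:

pip install yappi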

We'll be explaining the usage of yappi in two ways:

  • Profiling From Command Prompt/Shell.
  • Profiling a Particular Part of Code Using the yappi Library in Python Code.

Profiling From Command Prompt/Shell

As a part of this section, we'll explain how we can profile a script from the command line using yappi. We'll also explain various options available when profiling code from the command line.

Example 1: Simple Profiling

As a part of our first example, we'll explain how we can profile a simple script using yappi. Below we have designed a simple Python script that loops 10000 times, calls the add function with the loop index passed twice, appends the result to a list, and returns the list at the end. We have then profiled this script using yappi.

def add(a, b):
    return a+b


def get_sum_of_list():
    final_list = []
    for i in range(10000):
        out = add(i, i)
        final_list.append(out)
    return final_list


if __name__ == "__main__":
    l = get_sum_of_list()
    #print(l)

Please make a note that we have invoked yappi from a Jupyter notebook cell and that the profiled script is kept in the profiling_examples folder.

We can see from the output below that yappi has reported profiling results for each function call. By default, yappi measures the CPU time taken by functions, and the table is sorted by ttot in descending order.

The function call table has a list of columns mentioned below:

  • name - It represents the name of the function that was executed.
  • ncall - It holds an integer representing the number of times the function was called. For recursive functions, this column shows two numbers separated by a slash: one is the call count excluding recursive calls and the other is the total count including recursive calls (see the short recursion sketch after this list).
  • tsub - It represents the time spent in the function itself, excluding sub-function calls.
  • ttot - It represents the total time taken to execute the function, including sub-function calls.
  • tavg - It represents the average time spent per call of the function.
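
To illustrate the slash-separated ncall format mentioned above, below is a minimal sketch of a recursive function kept in a hypothetical script (say recursion_example.py, a name of our own choosing). Profiling it the same way as the other scripts, e.g. !yappi recursion_example.py, would show two slash-separated numbers in the ncall column for fib because the function calls itself.

# recursion_example.py (hypothetical file name)
def fib(n):
    # fib calls itself, so yappi reports two slash-separated counts in its
    # ncall column rather than a single number.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

if __name__ == "__main__":
    fib(15)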

The thread call table has a list of columns mentioned below:

  • name - It represents the class name of the thread, which was given when creating the thread using threading.Thread.
  • id - It’s a unique integer id given by yappi.
  • tid - It’s an integer representing the real OS thread id.
  • ttot - It represents the total time taken by the thread.
  • scnt - It’s an integer representing the number of times the thread was scheduled. This count can be more than one if the thread went to sleep and the CPU was given to another thread (thread switching).
!yappi profiling_examples/example_1.py
Clock type: CPU
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg
..g_examples/example_1.py:1 <module>  1      0.000003  0.025356  0.025356
..les/example_1.py:5 get_sum_of_list  1      0.013539  0.025352  0.025352
..filing_examples/example_1.py:1 add  10000  0.005976  0.005976  0.000001
..tlocale.py:33 getpreferredencoding  1      0.000004  0.000006  0.000006
..s.py:319 IncrementalDecoder.decode  1      0.000004  0.000005  0.000005
..py:309 IncrementalDecoder.__init__  1      0.000004  0.000005  0.000005
..py:260 IncrementalDecoder.__init__  1      0.000001  0.000001  0.000001

name           id     tid              ttot      scnt
_MainThread    0      140235488175936  0.026799  1

Example 2: Changing Profiling Output Format

yappi lets us output profiling results in different formats using the -f or --output-format=output_format option. The format given after -f can be one of pstat, callgrind, or ystat. Below we have profiled the same example script and formatted the results as pstat.

!yappi -f pstat profiling_examples/example_1.py
Clock type: CPU
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg
..g_examples/example_1.py:1 <module>  1      0.000003  0.024965  0.024965
..les/example_1.py:5 get_sum_of_list  1      0.013389  0.024961  0.024961
..filing_examples/example_1.py:1 add  10000  0.005848  0.005848  0.000001
..tlocale.py:33 getpreferredencoding  1      0.000004  0.000006  0.000006
..s.py:319 IncrementalDecoder.decode  1      0.000003  0.000005  0.000005
..py:309 IncrementalDecoder.__init__  1      0.000004  0.000004  0.000004
..py:260 IncrementalDecoder.__init__  1      0.000001  0.000001  0.000001

name           id     tid              ttot      scnt
_MainThread    0      140358719629120  0.026323  1

Example 3: Saving Profiling Output to a File

yappi also lets us save profiling results to an output file. It provides the -o or --output-file=output_file option for this purpose. Below we have stored the result of profiling our example script in an output file named yappi_ex1.out in callgrind format and then displayed the file contents.

!yappi -f callgrind -o yappi_ex1.out profiling_examples/example_1.py
!cat yappi_ex1.out
version: 1
creator: yappi
pid: 6097
cmd:  profiling_examples/example_1.py
part: 1

events: Ticks

fl=(11) profiling_examples/example_1.py
fl=(12) profiling_examples/example_1.py
fl=(13) profiling_examples/example_1.py
fl=(1) /home/sunny/anaconda3/lib/python3.7/_bootlocale.py
fl=(6) /home/sunny/anaconda3/lib/python3.7/codecs.py
fl=(3) /home/sunny/anaconda3/lib/python3.7/codecs.py
fl=(4) /home/sunny/anaconda3/lib/python3.7/codecs.py

fn=(11) <module> profiling_examples/example_1.py:1
fn=(12) get_sum_of_list profiling_examples/example_1.py:5
fn=(13) add profiling_examples/example_1.py:1
fn=(1) getpreferredencoding /home/sunny/anaconda3/lib/python3.7/_bootlocale.py:33
fn=(6) IncrementalDecoder.decode /home/sunny/anaconda3/lib/python3.7/codecs.py:319
fn=(3) IncrementalDecoder.__init__ /home/sunny/anaconda3/lib/python3.7/codecs.py:309
fn=(4) IncrementalDecoder.__init__ /home/sunny/anaconda3/lib/python3.7/codecs.py:260

fl=(11)
fn=(11)
1 3
cfl=(12)
cfn=(12)
calls=1 0
0 25249

fl=(12)
fn=(12)
5 13554
cfl=(13)
cfn=(13)
calls=10000 0
0 5918

fl=(13)
fn=(13)
1 5918

fl=(1)
fn=(1)
33 3

fl=(6)
fn=(6)
319 3

fl=(3)
fn=(3)
309 3
cfl=(4)
cfn=(4)
calls=1 0
0 1

fl=(4)
fn=(4)
260 1
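
The callgrind file above is meant to be opened with tools like KCachegrind/QCachegrind. If we instead save the output in pstat format (e.g. !yappi -f pstat -o yappi_pstat.out profiling_examples/example_1.py, where the file name yappi_pstat.out is our own choice), the resulting file can be inspected from Python with the standard pstats module. Below is a minimal sketch of that, assuming the pstat file is in the standard format that pstats.Stats can read.

import pstats

# Load the pstat-format dump saved by yappi and print the top 10 entries
# sorted by cumulative time (assumes the file was created as described above).
stats = pstats.Stats("yappi_pstat.out")
stats.sort_stats("cumulative").print_stats(10)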

Example 4: Profiling Built-In Functions

yappi lets us choose whether to include the time taken by built-in Python functions in the report. We can instruct yappi to include built-in functions by using the -b or --builtins option. Below we have instructed yappi to include built-in function times when profiling our script. We can see that it now includes functions like the list append() method and a few other built-in calls that were not present in the previous profiling results.

!yappi -b profiling_examples/example_1.py
Clock type: CPU
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg
builtins.exec                         1      0.000003  0.026129  0.026129
..g_examples/example_1.py:1 <module>  1      0.000005  0.026126  0.026126
..les/example_1.py:5 get_sum_of_list  1      0.014146  0.026120  0.026120
..filing_examples/example_1.py:1 add  10000  0.006088  0.006088  0.000001
..method 'append' of 'list' objects>  10000  0.005886  0.005886  0.000001
builtins.compile                      1      0.001285  0.001285  0.001285
io.open                               1      0.000028  0.000038  0.000038
..d' of '_io.TextIOWrapper' objects>  1      0.000010  0.000015  0.000015
..tlocale.py:33 getpreferredencoding  1      0.000004  0.000006  0.000006
..s.py:319 IncrementalDecoder.decode  1      0.000004  0.000005  0.000005
..py:309 IncrementalDecoder.__init__  1      0.000004  0.000005  0.000005
sys._getframe                         2      0.000002  0.000002  0.000001
_locale.nl_langinfo                   1      0.000002  0.000002  0.000002
_codecs.utf_8_decode                  1      0.000002  0.000002  0.000002
..py:260 IncrementalDecoder.__init__  1      0.000001  0.000001  0.000001

name           id     tid              ttot      scnt
_MainThread    0      140259582064448  0.027512  1

Example 5: Report Profiling Time as Wall Time or CPU Clock Time

yappi lets us choose whether it reports wall time or CPU time for the profiling results. Wall time is the total elapsed time taken by the code, including time spent waiting for the CPU, for input, for network responses, etc. CPU time is only the time during which the code actually used the CPU, excluding all of those waits. We can specify which clock to use with the -c or --clock-type=clock_type option.
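
To see the difference concretely, consider a hypothetical script like the one below (say sleepy_example.py, a name of our own choosing) that mostly sleeps. Profiling it with -c wall would attribute roughly a second to slow_func, while -c cpu would report only the small amount of actual CPU work, since sleeping does not use the CPU.

import time

def slow_func():
    time.sleep(1)              # waiting, not using the CPU
    return sum(range(100000))  # a little real CPU work

if __name__ == "__main__":
    slow_func()

Coming back to our example script, below we have profiled it both ways.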

!yappi -c wall profiling_examples/example_1.py
Clock type: WALL
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg
..g_examples/example_1.py:1 <module>  1      0.000002  0.007726  0.007726
..les/example_1.py:5 get_sum_of_list  1      0.004804  0.007724  0.007724
..filing_examples/example_1.py:1 add  10000  0.001592  0.001592  0.000000
..tlocale.py:33 getpreferredencoding  1      0.000003  0.000005  0.000005
..s.py:319 IncrementalDecoder.decode  1      0.000002  0.000004  0.000004
..py:309 IncrementalDecoder.__init__  1      0.000003  0.000003  0.000003
..py:260 IncrementalDecoder.__init__  1      0.000000  0.000000  0.000000

name           id     tid              ttot      scnt
_MainThread    0      140521224836928  0.009458  1
!yappi -c cpu profiling_examples/example_1.py
Clock type: CPU
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg
..g_examples/example_1.py:1 <module>  1      0.000005  0.027066  0.027066
..les/example_1.py:5 get_sum_of_list  1      0.014626  0.027061  0.027061
..filing_examples/example_1.py:1 add  10000  0.006248  0.006248  0.000001
..tlocale.py:33 getpreferredencoding  1      0.000004  0.000006  0.000006
..s.py:319 IncrementalDecoder.decode  1      0.000004  0.000005  0.000005
..py:309 IncrementalDecoder.__init__  1      0.000004  0.000005  0.000005
..py:260 IncrementalDecoder.__init__  1      0.000001  0.000001  0.000001

name           id     tid              ttot      scnt
_MainThread    0      140113281701696  0.032363  1

Example 6: Profiling Only Thread That Calls start()

yappi can profile multi-threaded code as well. It lets us decide whether we want it to report the time spent in every thread or only in the main thread that started the other threads. We can control this using the -s or --single-thread option.

To explain this example, below we have created a simple script that uses multi-threading. The script defines a function that generates a random number between 1 and 100, with an artificial 2-second wait added so that it takes a while to complete. We have then created a ThreadPoolExecutor with 3 worker threads and submitted the random number generator function three times so that it runs in different threads. We have also printed the output of the calls.

If you are interested in learning about the concurrent.futures python module then please feel free to check our tutorial on the same.

import time
import random
from concurrent.futures import ThreadPoolExecutor


def slow_random_gen():
    time.sleep(2)
    val = random.randint(1,100)
    return val

def main_func():
    ppe = ThreadPoolExecutor(max_workers=3)
    futures = []

    futures.append(ppe.submit(slow_random_gen))
    futures.append(ppe.submit(slow_random_gen))
    futures.append(ppe.submit(slow_random_gen))

    print([future.result() for future in futures])

if __name__ == '__main__':
    main_func()

Below we have profiled the script written above without using any option. We can notice that, by default, it has accounted for all threads (the main thread and the 3 threads created by the thread pool executor).

In the next cell, we have profiled the script using the -s option, which reports only the main thread that created the other threads; the threads created by the thread pool executor are not reported.

Please make a note that in the example below we have shortened the function call output, as it was quite long due to the various libraries involved. Don't be surprised if you see a long list of function calls when you profile this code with yappi yourself.

!yappi profiling_examples/multi_threaded_example.py

[15, 69, 69]

Clock type: CPU
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg
..lti_threaded_example.py:1 <module>  1      0.000034  0.014990  0.014990
..lib._bootstrap>:978 _find_and_load  20/3   0.000162  0.013865  0.000693
..strap>:948 _find_and_load_unlocked  20/3   0.000089  0.013721  0.000686
..lib._bootstrap>:663 _load_unlocked  20/4   0.000149  0.012675  0.000634
..>:722 SourceFileLoader.exec_module  11/4   0.000044  0.012373  0.001125
..rap>:211 _call_with_frames_removed  30/4   0.000052  0.012340  0.000411
...

name           id     tid              ttot      scnt
_MainThread    0      139947519743808  0.016440  6
Thread         2      139947415697152  0.000374  2
Thread         1      139947491821312  0.000356  2
Thread         3      139947407304448  0.000246  2
!yappi -s profiling_examples/multi_threaded_example.py
[84, 59, 18]

Clock type: CPU
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg
..lti_threaded_example.py:1 <module>  1      0.000041  0.014064  0.014064
..lib._bootstrap>:978 _find_and_load  20/3   0.000148  0.012756  0.000638
..strap>:948 _find_and_load_unlocked  20/3   0.000080  0.012646  0.000632
..lib._bootstrap>:663 _load_unlocked  20/4   0.000142  0.011733  0.000587
..>:722 SourceFileLoader.exec_module  11/4   0.000040  0.011459  0.001042
..rap>:211 _call_with_frames_removed  30/4   0.000047  0.011400  0.000380
..ent/futures/__init__.py:4 <module>  1      0.000006  0.005445  0.005445
..urrent/futures/_base.py:4 <module>  1      0.000027  0.005024  0.005024
..ib/python3.7/random.py:38 <module>  1      0.000123  0.004458  0.004458
...7/logging/__init__.py:24 <module>  1      0.000098  0.004047  0.004047
...

name           id     tid              ttot      scnt
_MainThread    0      139785483880256  0.015532  1

Profiling a Particular Part of Code Using the yappi Library

As a part of this section, we'll explain how we can use yappi as a Python library to profile a particular part of Python code. We'll explain various yappi API functions along the way.

Example 1: Profiling Using run() Function

The simplest way to profile code with yappi is to use the run() function as a context manager around the code that we want to profile.

  • run() - This function can be used as a context manager for profiling code using yappi. It has three parameters mentioned below.
    • builtins - It’s a boolean representing whether you want to profile builtins or not. The default is False.
    • profile_threads - It’s a boolean representing whether you want to profile all threads or just the main thread. The default is True, which means all threads are profiled.
    • profile_greenlets - It’s a boolean representing whether to profile greenlets or not.
  • yappi.get_func_stats() - It returns a YFuncStats object which has profiling information about each function call.
  • yappi.get_thread_stats() - It returns a YThreadStats object which has profiling information about threads.
  • YFuncStats.print_all() / YThreadStats.print_all() - This method prints profiling results to standard output. It has a parameter named out which defaults to sys.stdout; by passing a file-like object instead, we can write the profiling results to a file as well (see the short sketch after this list).
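
As mentioned in the last bullet, print_all() writes to sys.stdout by default but accepts a file-like object through its out parameter. Below is a minimal sketch of redirecting the report to a file (the helper function work() and the file name func_stats.txt are our own, for illustration only).

import yappi

def work():
    return sum(i * i for i in range(10000))

with yappi.run():
    work()

# Pass a file object via `out` to write the formatted tables to a file
# instead of standard output.
with open("func_stats.txt", "w") as f:
    yappi.get_func_stats().print_all(out=f)
    yappi.get_thread_stats().print_all(out=f)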

Below we have explained the usage of all the above-mentioned methods with a simple example. We simply loop 10000 times, generate two random numbers each time, add them, and append the result of that addition to a Python list, which is returned at the end. We have called our main function inside the yappi.run() context manager so that it gets profiled along with all the functions it calls. We have also printed a few constants available in yappi which store some of the library's defaults.

CODE

import random
import yappi

def add(a, b):
    return a+b


def get_sum_of_list():
    final_list = []
    for i in range(10000):
        rand1 = random.randint(1,100)
        rand2 = random.randint(1,100)
        out = add(rand1, rand2)
        final_list.append(out)
    return final_list


with yappi.run(builtins=True):
    l = get_sum_of_list()
    #    print(l)

print("================ Func Stats ===================")

yappi.get_func_stats().print_all()

print("\n================ Thread Stats ===================")

yappi.get_thread_stats().print_all()


print("\nYappi Backend Types   : ",yappi.BACKEND_TYPES)
print("Yappi Clock Types     : ", yappi.CLOCK_TYPES)
print("Yappi Stats Columns   : ", yappi.COLUMNS_FUNCSTATS)
print("Yappi Line Sep        : ", yappi.LINESEP)

OUTPUT

================ Func Stats ===================

Clock type: CPU
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg
yappi_ex1.py:8 get_sum_of_list        1      0.029480  0.167297  0.167297
..on3.7/random.py:218 Random.randint  20000  0.025818  0.125842  0.000006
..3.7/random.py:174 Random.randrange  20000  0.029624  0.100024  0.000005
...7/random.py:224 Random._randbelow  20000  0.043398  0.070399  0.000004
..bits' of '_random.Random' objects>  25450  0.015492  0.015492  0.000001
..hod 'bit_length' of 'int' objects>  20000  0.011510  0.011510  0.000001
yappi_ex1.py:4 add                    10000  0.006068  0.006068  0.000001
..method 'append' of 'list' objects>  10000  0.005907  0.005907  0.000001
.. _GeneratorContextManager.__exit__  1      0.000000  0.000000  0.000000
builtins.next                         1      0.000000  0.000000  0.000000

================ Thread Stats ===================

name           id     tid              ttot      scnt
_MainThread    0      139678150301504  0.167317  1

Yappi Backend Types   :  {'NATIVE_THREAD': 0, 'GREENLET': 1}
Yappi Clock Types     :  {'WALL': 0, 'CPU': 1}
Yappi Stats Columns   :  ['name', 'ncall', 'ttot', 'tsub', 'tavg']
Yappi Line Sep        :

Example 2: Sorting Profiling Based on Stats Columns

As a part of this example, we'll explain how we can change between wall and CPU time profiling as well as how we can sort the results of profiling.

  • yappi.set_clock_type() - This function accepts a string specifying whether we want to profile wall time or CPU time.
  • get_clock_type() - It returns clock type used by profiler.
  • get_mem_usage() - It returns memory usage by yappi profiler.
  • YFuncStats.sort() - This method lets us sort results in ascending or descending order based on different columns of profiling like total time, average time, number of calls, etc.

Below we have explained the usage of the above-mentioned methods, sorting the output by the number of calls (ncall) in ascending order.

CODE

import random
import yappi

def add(a, b):
    return a+b


def get_sum_of_list():
    final_list = []
    for i in range(10000):
        rand1 = random.randint(1,100)
        rand2 = random.randint(1,100)
        out = add(rand1, rand2)
        final_list.append(out)
    return final_list


yappi.set_clock_type("WALL")

with yappi.run(builtins=True):
    l = get_sum_of_list()
    #    print(l)

print("Default Sort Type   : ", yappi.DEFAULT_SORT_TYPE)
print("Default Sort Order  : ", yappi.DEFAULT_SORT_ORDER)
print("Default Clock Type  : ", yappi.get_clock_type())
print("Yappi Memoery Usage : ", yappi.get_mem_usage())

print("\n================ Func Stats ===================")

stats = yappi.get_func_stats()
sorted_stats = stats.sort("ncall", sort_order="asc")
sorted_stats.print_all()

OUTPUT

Default Sort Type   :  totaltime
Default Sort Order  :  desc
Default Clock Type  :  wall
Yappi Memory Usage  :  134808

================ Func Stats ===================

Clock type: WALL
Ordered by: ncall, asc

name                                  ncall  tsub      ttot      tavg
yappi_ex2.py:8 get_sum_of_list        1      0.009664  0.050853  0.050853
builtins.next                         1      0.000000  0.000000  0.000000
.. _GeneratorContextManager.__exit__  1      0.000000  0.000000  0.000000
yappi_ex2.py:4 add                    10000  0.001418  0.001418  0.000000
..method 'append' of 'list' objects>  10000  0.001230  0.001230  0.000000
..on3.7/random.py:218 Random.randint  20000  0.006829  0.038541  0.000002
..3.7/random.py:174 Random.randrange  20000  0.011719  0.031712  0.000002
...7/random.py:224 Random._randbelow  20000  0.013457  0.019993  0.000001
..hod 'bit_length' of 'int' objects>  20000  0.002394  0.002394  0.000000
..bits' of '_random.Random' objects>  25697  0.004142  0.004142  0.000000

Example 3: Saving Profiling Results to A File and Loading It

As a part of this example, we'll explain how we can store profiling results to an output file and then reload it again.

  • YFuncStats.save() - This method accepts a file name to which profiling results are saved.
  • YFuncStats.add() - This method accepts a file name from which previously saved profiling results are loaded.
  • YFuncStats.clear() - This method clears current profiling results present in it.

Below we have explained the usage of the above-mentioned methods with a simple example.

CODE

import random
import yappi

def add(a, b):
    return a+b


def get_sum_of_list():
    final_list = []
    for i in range(10000):
        rand1 = random.randint(1,100)
        rand2 = random.randint(1,100)
        out = add(rand1, rand2)
        final_list.append(out)
    return final_list


yappi.set_clock_type("WALL")

with yappi.run(builtins=True):
    l = get_sum_of_list()
    #    print(l)

print("Default Clock Type  : ", yappi.get_clock_type())
print("Yappi Memory Usage : ", yappi.get_mem_usage())

stats = yappi.get_func_stats()

print("Saving Stats to output File")
stats.save("yappi_ex3.out")

print("Clearing existing Stats ")
stats.clear()

print("Checking Current Stats After Clearing Stats")
stats.print_all()

print("Loading Stats From File")
stats.add("yappi_ex3.out")

print("Loaded Stats")
stats.print_all()

OUTPUT

Default Clock Type  :  wall
Yappi Memory Usage :  134808
Saving Stats to output File
Clearing existing Stats
Checking Current Stats After Clearing Stats
Loading Stats From File
Loaded Stats

Clock type: WALL
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg
yappi_ex3.py:8 get_sum_of_list        1      0.009884  0.051883  0.051883
..on3.7/random.py:218 Random.randint  20000  0.007478  0.039296  0.000002
..3.7/random.py:174 Random.randrange  20000  0.011625  0.031818  0.000002
...7/random.py:224 Random._randbelow  20000  0.013642  0.020193  0.000001
..bits' of '_random.Random' objects>  25673  0.004147  0.004147  0.000000
..hod 'bit_length' of 'int' objects>  20000  0.002404  0.002404  0.000000
yappi_ex3.py:4 add                    10000  0.001439  0.001439  0.000000
..method 'append' of 'list' objects>  10000  0.001264  0.001264  0.000000
.. _GeneratorContextManager.__exit__  1      0.000000  0.000000  0.000000
builtins.next                         1      0.000000  0.000000  0.000000

Example 4: Profiling using start() and stop() Functions

As a part of this example, we'll explain how we can profile a particular part of the code by using yappi's start() and stop() functions.

  • yappi.start() - This function informs yappi to start profiling python code. It accepts a list of arguments mentioned below.
    • builtins - It’s a boolean representing whether you want to profile builtins or not. The default is False.
    • profile_threads - It’s a boolean representing whether you want to profile all threads or just the main thread. The default is True, which means all threads are profiled.
    • profile_greenlets - It’s a boolean representing whether to profile greenlets or not.
  • yappi.stop() - This function informs yappi to stop profiling python code.
  • yappi.is_running() - This function returns boolean specifying whether yappi is profiling code or not.
  • yappi.clear_stats() - This function informs yappi to clear profiling data that it has gathered till now.

CODE

import random
import yappi

def add(a, b):
    return a+b


def get_sum_of_list():
    final_list = []
    for i in range(10000):
        rand1 = random.randint(1,100)
        rand2 = random.randint(1,100)
        out = add(rand1, rand2)
        final_list.append(out)
    return final_list


yappi.start()

l = get_sum_of_list()
print("Is Profiling Running : ", yappi.is_running())

yappi.stop()

print("Is Profiling Running : ", yappi.is_running())
print("========== Func Stats ==============")
yappi.get_func_stats().print_all()
print("\n========== Thread Stats ==============")
yappi.get_thread_stats().print_all()

yappi.clear_stats()

yappi.start(builtins=True, profile_threads=False)

l = get_sum_of_list()

yappi.stop()

print("========== Func Stats =================")
yappi.get_func_stats().print_all()

OUTPUT

Is Profiling Running :  True
Is Profiling Running :  False
========== Func Stats ==============

Clock type: CPU
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg
yappi_ex4.py:8 get_sum_of_list        1      0.029489  0.168133  0.168133
..on3.7/random.py:218 Random.randint  20000  0.025782  0.126814  0.000006
..3.7/random.py:174 Random.randrange  20000  0.030228  0.101032  0.000005
...7/random.py:224 Random._randbelow  20000  0.043764  0.070804  0.000004
yappi_ex4.py:4 add                    10000  0.005995  0.005995  0.000001

========== Thread Stats ==============

name           id     tid              ttot      scnt
_MainThread    0      140700123936576  0.168184  1
========== Func Stats =================

Clock type: CPU
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg
yappi_ex4.py:8 get_sum_of_list        1      0.025540  0.147629  0.147629
..on3.7/random.py:218 Random.randint  20000  0.022682  0.111618  0.000006
..3.7/random.py:174 Random.randrange  20000  0.026678  0.088935  0.000004
...7/random.py:224 Random._randbelow  20000  0.038397  0.062257  0.000003
..bits' of '_random.Random' objects>  25748  0.013910  0.013910  0.000001
..hod 'bit_length' of 'int' objects>  20000  0.009951  0.009951  0.000000
yappi_ex4.py:4 add                    10000  0.005356  0.005356  0.000001
..method 'append' of 'list' objects>  10000  0.005115  0.005115  0.000001

Example 5: Profiling Multi-Threaded Code

As a part of this example, we have explained how we can profile multi-threaded code using yappi. We have used the same code that we used earlier when profiling multi-threaded code from the command prompt/shell.

CODE

import time
import random
from concurrent.futures import ThreadPoolExecutor
import yappi


def slow_random_gen():
    time.sleep(2)
    val = random.randint(1,100)
    return val

def main_func():
    ppe = ThreadPoolExecutor(max_workers=3)
    futures = []

    futures.append(ppe.submit(slow_random_gen))
    futures.append(ppe.submit(slow_random_gen))
    futures.append(ppe.submit(slow_random_gen))

    print([future.result() for future in futures])


yappi.start()

main_func()

yappi.stop()

#print("========== Func Stats ==============")
#yappi.get_func_stats().print_all()  
print("\n========== Thread Stats ==============")
yappi.get_thread_stats().print_all()

yappi.clear_stats()


yappi.start(profile_threads=False)

main_func()

yappi.stop()

print("\n========== Thread Stats ==============")
yappi.get_thread_stats().print_all()

OUTPUT

[55, 77, 43]

========== Thread Stats ==============

name           id     tid              ttot      scnt
_MainThread    0      139640863278912  0.001367  6
Thread         1      139640835811072  0.000469  2
Thread         3      139640819025664  0.000438  2
Thread         2      139640827418368  0.000436  2
[47, 76, 59]

========== Thread Stats ==============

name           id     tid              ttot      scnt
_MainThread    0      139640863278912  0.002273  1

Example 6: Profiling Using @profile Decorator

Our sixth and last example explains how we can use the @profile decorator to instruct yappi to profile only the decorated function from the script.

  • profile() - This function can be used as a decorator on another function so that all calls to that function are profiled by yappi. It accepts an argument named clock_type, which can be set to cpu or wall, and an argument named profile_builtins, which can be set to True or False to inform yappi whether or not to profile built-in functions.

Below we have explained with a simple example, how we can use this decorator.

CODE

import yappi

def add(a, b):
    return a+b

@yappi.profile(clock_type="wall", profile_builtins=False)
def get_sum_of_list():
    final_list = []
    for i in range(10000):
        out = add(i, i)
        final_list.append(out)
    return final_list

l = get_sum_of_list()

OUTPUT

Executed in 0.037366 WALL clock seconds

Clock type: WALL
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg
yappi_ex6.py:6 get_sum_of_list        1      0.025634  0.037346  0.037346
yappi_ex6.py:3 add                    100..  0.006410  0.006410  0.000000