Advanced Django and Python Performance

24 Oct 2018

My latest work project has involved writing a custom Django API from scratch. Because of the project’s business logic and front-end requirements, something like Django REST Framework wasn’t really a great option. I learned a great deal about the finer points of Python and Django performance while building an API capable of returning thousands of results quickly.

I’ve consolidated some of my tips below.

Django

Model Managers are useful - but beware of chaining them with other queries

Be careful using model managers, especially when working with Django Prefetch data. You will incur additional lookup queries for the filtering your manager performs by default, as well as for any other operations chained onto the prefetched data (exclude, order_by, filter, etc.).
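
Here’s a minimal sketch of the kind of chaining I mean, using hypothetical Author/Book models rather than code from the actual project:

from django.db import models

class PublishedBookManager(models.Manager):
    def get_queryset(self):
        # Every call through this manager adds its own filter to the SQL
        return super(PublishedBookManager, self).get_queryset().filter(published=True)

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    author = models.ForeignKey(Author, related_name='books', on_delete=models.CASCADE)
    title = models.CharField(max_length=100)
    published = models.BooleanField(default=False)

    objects = models.Manager()
    published_books = PublishedBookManager()

# prefetch_related only caches the exact queryset it was given -
# here, each author's books filtered through the custom manager:
authors = Author.objects.prefetch_related(
    models.Prefetch('books', queryset=Book.published_books.all())
)

for author in authors:
    author.books.all()                   # served from the prefetch cache
    author.books.filter(title='Django')  # extra query on every iteration!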

Avoid bringing Python into it whenever possible

Do everything you can with properly written Models, queries, and prefetch objects. Once you start pulling rows into Python and massaging them there, you will significantly impact the performance of your application.
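
For example (a sketch using a hypothetical Order model), summing a column in the database is a single query, while doing the same thing in Python drags every row across the wire first:

from django.db.models import Sum

# Slow: pulls every Order row into Python objects, then loops over them
total = sum(order.amount for order in Order.objects.all())

# Fast: one SQL query, the database does the arithmetic
total = Order.objects.aggregate(total=Sum('amount'))['total']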

Django is fast. Databases are fast. Python is slow.

Learning to use select_related and prefetch_related will save you a ton of time and debugging, and it will improve your query speeds! As I mentioned above, be careful mixing model managers with these utilities. Also, whenever you introduce multiple relationships into a query, you will want to reach for distinct() and order_by().
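
As a quick refresher (again with hypothetical models - Book, Author, and a many-to-many Genre), select_related follows foreign keys with a SQL join, while prefetch_related runs a separate lookup and stitches the results together in Python:

# N+1 queries: one for the books, plus one per book for its author
for book in Book.objects.all():
    print book.author.name

# 1 query: the author columns are pulled in via a SQL join
for book in Book.objects.select_related('author'):
    print book.author.name

# 2 queries: one for the books, one for all of their genres,
# joined together in Python by Django
for book in Book.objects.prefetch_related('genres'):
    print book.genres.all()

Having said that…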

Watch out for distinct() gotchas

If you are using advanced Django queries that span multiple relationships, you may notice that duplicate rows are returned. No problem, we’ll just call .distinct() on the queryset, right?

If you call distinct() without pairing it with an appropriate order_by(), you can still receive duplicate results, because any ordering fields (including a default Meta ordering that spans a relationship) end up in the SELECT and make otherwise-identical rows distinct. This is a known Django “thing” - beware.

"When you specify field names, you must provide an order_by() in the QuerySet, and the fields in order_by() must start with the fields in distinct(), in the same order."
- Django Docs
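
In practice that looks something like the sketch below (hypothetical Entry/Author/Book models; distinct() on specific fields is PostgreSQL-only):

# distinct() on specific fields: order_by() must lead with those same fields
Entry.objects.order_by('author', '-pub_date').distinct('author')

# Spanning a relationship: pair distinct() with an explicit order_by() on
# local fields, otherwise related ordering columns sneak into the SELECT
# and otherwise-identical rows come back looking "distinct"
Author.objects.filter(books__title__icontains='django') \
    .order_by('name') \
    .distinct()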

Profile your Django queries

You can’t fix what you don’t measure. Make sure DEBUG=True is set in your Django settings.py file (connection.queries is only populated when DEBUG is True), and then drop this snippet into your code to print the queries being run.

from django.db import connection

# Add this block after your queries have been executed
if connection.queries:
    total_time = 0
    for i, query in enumerate(connection.queries, 1):
        print "%s: %s" % (i, query)
        total_time += float(query['time'])
    print 'Total queries: %s' % len(connection.queries)
    print 'Total time: %s' % total_time

Python

Use map when performance matters AND the functions are complex AND you are using named functions. Use list comprehensions for everything else.

map is a built-in function written in C, and in certain cases it outperforms an equivalent list comprehension (the examples below are Python 2, where map returns a list).

Please note that if you pass an anonymous lambda to map rather than a named function, you lose map’s optimization benefits: the per-element lambda call makes it slower than an equivalent list comprehension with an inline expression. The benchmark below shows just how much those per-element function calls cost.

# map with a named function
def map_it(arr):
    return map(square_entry, arr)

def square_entry(x):
    return x ** 2

# List comprehension calling a named function
def list_comp(arr):
    return [square_entry(x) for x in arr]

# List comprehension with an inline expression - no per-element function call
def list_comp_lambda(arr):
    return [x ** 2 for x in arr]

# Plain for loop with a pre-bound append
def for_loop(arr):
    response = []
    a = response.append
    for i in arr:
        a(i ** 2)
    return response

To test the performance of these functions, we create an array with 10,000 numbers, and go through the array squaring each value. Pretty simple stuff. Check out the wild differences in runtime and performance:

  1. List Comprehension with an inline expression: 5 function calls in 0.001 seconds
  2. For Loop: 10005 function calls in 0.048 seconds
  3. List Comprehension using named function: 10005 function calls in 0.049 seconds
  4. map with named function: 10006 function calls in 0.050 seconds

Moral of the story? If you are doing simple list operations, use a list comprehension with an inline expression. It is faster, more readable, and more pythonic.

When you’re munging complex data in Python, it’s a good idea to handle the data modification in a named function and then use map to call that function. Always profile your code before and after switching to map to ensure that you are actually gaining performance and not losing it!
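
For example, a quick way to sanity-check the lambda penalty mentioned earlier is a pair of timeit one-liners (Python 2 shown; your numbers will vary, which is exactly why you measure):

$ python -mtimeit -s'xs=range(10000)' 'map(lambda x: x ** 2, xs)'
$ python -mtimeit -s'xs=range(10000)' '[x ** 2 for x in xs]'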

You might be asking: so, when should I use map?

A good candidate for map is any long or complex function that will perform conditional operations on the provided arguments. map functions are great for iterating through objects and assigning properties based on data attributes, for example.

Here’s an example of map being faster than the equivalent list comprehension when both call the same named function (shamelessly taken from Stack Overflow):

$ python -mtimeit -s'xs=range(10)' 'map(hex, xs)'
100000 loops, best of 3: 4.86 usec per loop
$ python -mtimeit -s'xs=range(10)' '[hex(x) for x in xs]'
100000 loops, best of 3: 5.58 usec per loop

Abuse try/except when necessary - but be careful

If you’re using inline try/except statements (i.e. where it’s no big deal if the try block fails), just attempt the thing you want to do rather than guarding it with extraneous if statements.

Here’s some sample code and real profiling results to guide your decisions.

import os
import profile
import pstats

# This is a typical example of an extraneous if statement
def get_from_array_slow(array, index):
    try:
        # A typical `if` statement here might check to make sure
        # that our array is long enough for the index to be valid.
        # A perfectly reasonable check, right?
        if len(array) > index:
            # Unfortunately, we incur an unnecessary performance penalty
            # from calling len() on every lookup
            return array[index]
        else:
            return None
    except IndexError:
        return None

# This is functionally the same at runtime,
# but without the additional len() call
def get_from_array_fast(array, index):
    try:
        return array[index]
    except IndexError:
        return None


NUM_TRIALS = 10000

def with_if():
    for i in xrange(0, NUM_TRIALS):
        get_from_array_slow([], 99)  # Out of index

def without_if():
    for i in xrange(0, NUM_TRIALS):
        get_from_array_fast([], 99)  # Out of index

# This is a simple way of using the profile module available within Python
def profileIt(func_name):
    tmp_file = 'profiler'
    output_file = 'profiler'
    run_str = '%s()' % func_name
    tf = '%s_%s_tmp.tmp' % (tmp_file, func_name)
    of = '%s_%s_output.log' % (output_file, func_name)
    profile.run(run_str, tf)
    p = pstats.Stats(tf)
    p.sort_stats('tottime').print_stats(30)  # Print stats to console
    with open(of, 'w') as stream:  # Save to file
        stats = pstats.Stats(tf, stream=stream)
        stats.sort_stats('tottime').print_stats()
    os.remove(tf)  # Remove the tmp file

profileIt('with_if')
profileIt('without_if')

Our profiler results are below. Using an if took 0.098 seconds; using only try/except shaved off roughly one-third of the compute time, bringing it down to 0.065 seconds.

profiler_with_if_tmp.tmp

20004 function calls in 0.098 seconds

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
10000    0.049    0.000    0.073    0.000 profile_django.py:27(get_from_array_slow)
    1    0.025    0.025    0.098    0.098 profile_django.py:51(with_if)
10000    0.024    0.000    0.024    0.000 :0(len)
    1    0.000    0.000    0.000    0.000 :0(setprofile)
    1    0.000    0.000    0.098    0.098 profile:0(with_if())
    1    0.000    0.000    0.098    0.098 <string>:1(<module>)
    0    0.000             0.000          profile:0(profiler)

--------

profiler_without_if_tmp.tmp

10004 function calls in 0.065 seconds

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
10000    0.032    0.000    0.032    0.000 profile_django.py:41(get_from_array_fast)
    1    0.032    0.032    0.064    0.064 profile_django.py:56(without_if)
    1    0.000    0.000    0.000    0.000 :0(setprofile)
    1    0.000    0.000    0.065    0.065 profile:0(without_if())
    1    0.000    0.000    0.064    0.064 <string>:1(<module>)
    0    0.000             0.000          profile:0(profiler)

Notice that our function using if incurs twice as many function calls as our plain old try/except block - the extra 10,000 calls are all len().


AWS Certified Security - Specialty Exam Tips and Tricks

17 Aug 2018

If you’re planning on taking the AWS Security Specialty exam, I’ve compiled a quick list of tips that you may want to remember headed into the exam.

I passed the exam on August 18th, 2018. Before taking this exam, I held all three Associate-level certifications. This exam was harder than any Associate exam by far!

  1. Remember that CloudWatch (or any AWS service) cannot monitor your EC2 filesystems without an agent installed!
  2. Know KMS inside and out - this includes API actions like Decrypt and condition keys like kms:ViaService.
  3. Know the KMS key deletion policies and the differences between imported key material and AWS managed keys.
  4. Understand how cross-account access to various resources works.
  5. I had a lot of questions asking how to stop attacks from moving laterally across EC2 instances in a subnet. Most of the time you need to stop the instance and take a snapshot for forensic purposes. You also need to make sure that security groups in an Auto Scaling group do not allow traffic between instances on the same tier.
  6. Understand the difference between AWS Config, Trusted Advisor, and CloudTrail. They try to mix these up CONSTANTLY to trick you.
  7. Understand how AWS works to limit the “blast radius” of compromised keys in KMS, and the concept of perfect forward secrecy.
  8. Budget your time and flag questions that fluster you. Come back to them later.
  9. Use the test to take the test. Sometimes you will get a question that asks you about a property of an AWS service. Later in the test, you may find a question that references that exact property and gives you the correct answer.
  10. There are usually two blatantly incorrect answers, and two answers that could be right. Narrow down your choices.
  11. CloudHSM was not present on my exam, but questions about Kinesis and Athena were.



Advanced Javascript Tips and Optimization

25 Apr 2018

I’ve been working with Javascript for a few years now. It’s a wonderfully strange language, and there are a lot of ways to start a project and write code.

Here is what I think I know, as of April 2018. This guidance is heavily biased by working with Electron/React/Node, all over the stack, over multiple types of projects.

You need types

Developing a modern JS application without typing is both painful and tedious. I helped write a 25k+ LOC Node codebase without any form of typing, and let me tell you: when you are debugging a year-old function you wrote and trying to piece together what the data structure must have been at the time, it’s nearly impossible.

Any codebase that goes beyond a toy/prototype level must immediately implement TypeScript, Flow, or some form of typing, even if it is just a ridiculously long comment above each function describing the expected inputs/outputs (don’t actually do this - oh wait, this basically describes working with Python).

Make business-logic strings constant variables

If you frequently find yourself writing code like this:

if (Animal.type === 'dog') {
    // woof
}

Let me give you some advice. One day, one of your amazing engineers is going to write dogg instead of dog, and you’re going to miss it because you’re too busy making coffee, and that dumb code is going to end up in production, and your users are going to be rudely greeted by a stupid cat (or an error) instead of an awesome dog when they click “SHOW ME DOGS” on your new app, DogCatHorseShow!.

Instead, the first time you write a comparison like the one above, stop yourself, take a deep breath, and do the following.

Create a file called constants.ts (you’re using TypeScript, right? :))

The file should look like this:

export const kDogType = 'dog';
export const kCatType = 'cat';
export const kHorseType = 'majestic filly';

And now, your business logic file should look like this:

import { kDogType } from './constants';
if (Animal.type === kDogType) {
    // woof
}

Now you and your co-workers can safely compare values and you know that you’re all working with the same strings. If this example seems contrived, I want you to know that I just spent a lot of time fixing errors related to followup vs follow-up.

Consolidate your business logic EARLY, or you’ll end up with important comparisons scattered everywhere and subtle errors permeating your code.

Don’t mix data types in arrays

Although the Chrome team is making admirable gains in performance here, arrays that mix data types are ALWAYS slower to work with than arrays of a single element type.

// This is bad and slow
const slow = ['string', null, {}, 'this is bad', [ { a: 'b' } ], 10]

// This is fast and good
const fast = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

You can see this performance impact for yourself here: JSPerf

Don’t write “multi-purpose” functions

The V8 engine can’t optimize functions that are passed varying data types. If you really are trying to eke out every bit of performance, write pure functions that accept consistent data types, and make sure you only use them as intended!

Operations are monomorphic if the hidden classes of inputs are always the same - otherwise they are polymorphic, meaning some of the arguments can change type across different calls to the operation. - from Performance Tips for JavaScript in V8.

// This function seems innocent enough, right?
const add = (a, b) => a + b

add(1, 2)     // Returns 3. So far, so good
add('1', '2') // Returns '12'. This is getting bad
add({}, [])   // V8 throws its hands in the air and says "screw this"

Abusing functions like this, while convenient for the programmer, can lead to a nearly 50% reduction in speed.

Yikes!

Bundle operations that involve external resources

This one is probably a no-brainer, but consider batching any events or actions that rely on external resources to complete.

When you’re updating a JSON file or a database somewhere, you’re going to deal with incredible asynchronous headaches if your code has a theme of “each individual action manages itself.”

Consider writing your code in a way that lends itself to batching of updates - whether you’re doing HTTP requests, file updates, DOM reflows, or DB calls. Focusing on batching early in your codebase will allow you to scale faster when you start handling more actions and events.

Keep functions small

You should already do this just for cleanliness and understandability, but you should also know that the length of your written function (including comments(!!)) has an impact on performance.

The takeaway here should still be to keep functions small. At the moment we still have to avoid over-commenting (and even whitespace) inside functions. - from v8-perf discussion on Github

Write tests early and often

In a greenfield codebase, every feature you add should have an accompanying test. This prevents code rot from setting in, and it keeps the codebase style reasonable and the code testable from an early stage.

Especially when you’re doing open-source-driven work, you’re frequently going to hit subtle breakages when you upgrade your npm packages. Automated tests are the only way you’re going to catch them - yes, even in your weekend project that you’ll definitely never pick up again.

Look, I’m not saying you gotta go all TDD on your personal projects, but… one day you’ll want to go back to that project and use it, but you won’t be able to touch the code without breaking it. Been there, done that, learned my lesson.


Sea of Thieves - Couldn't install. We'll retry shortly

28 Mar 2018

If you’re getting the error "Couldn't install. We'll retry shortly" (error code 0x80070006) when trying to install Sea of Thieves through the Microsoft store, try this.

  1. Navigate to C:\Users\[your_user_name]\AppData\Local\Packages\Microsoft.WindowsStore_8wekyb3d8bbwe\LocalCache
  2. Delete all the folders & files in LocalCache - leave the LocalCache folder.
  3. Reboot and retry

If that doesn’t work, try the following:

  1. Open Windows Powershell as Admin
  2. Paste Set-ExecutionPolicy RemoteSigned into the terminal and hit Enter.
  3. Press Y to accept.
  4. Paste Get-AppXPackage -AllUsers | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppxManifest.xml"} (all one line) and hit Enter
  5. A lot of stuff will happen. Once it’s done, exit the Powershell terminal.
  6. Reboot and retry

Still not working?

  1. Open a CMD prompt as Admin
  2. Try the following commands one by one (paste + Enter)
    • dism /online /cleanup-image /startcomponentcleanup
    • sfc /scannow
  3. Reboot and retry

I haven’t tested this, but you can also try creating a new user account, changing it from standard to administrator, then logging in as the new user and trying the Store again.

For Error 0x80070005 please refer here