React Conf '18 - I'm Hooked

28 Oct 2018

I am flying home from React Conf 2018, hosted at the Westin Hotel in Lake Las Vegas. Some of the key takeaways of this year’s conference:

Hooks are coming

The conference started out strong with Dan Abramov demonstrating the new React Hooks proposal to the crowd. I heard many audible gasps as Dan walked us through using hooks to replace traditional React classes. I had the feeling that I was watching a new version of React being demoed right before our eyes. Although the API is unstable and experimental at this point (it’s just an RFC right now), it is immediately clear that this is the future of React. I am very grateful I was there to hear the announcement in person.

So, what are hooks going to do for you? Well, first of all, you’re going to be writing a lot less boilerplate class code. React 16.7 (alpha) allows us to use functions rather than classes as our components.

State management will be handled by React’s useState (docs) function. I will say that I felt a little strange about the ergonomics of useState (hooks have to be called in the same order on every render, so ordering matters). I think that for complex components that require lots of state updates, I would still write traditional classes. This useState pattern seems best suited for simpler components that only hold a couple of values in state.
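
Here’s a rough sketch of what that looks like (a toy component of my own, not from Dan’s demo):

import React, { useState } from 'react'

// Each useState call returns the current value and a setter. React keeps track
// of hooks by call order, so they must run in the same order on every render -
// no hooks inside conditionals or loops.
export default function Counter() {
  const [count, setCount] = useState(0)
  const [label, setLabel] = useState('clicks')

  return (
    <button onClick={() => { setCount(count + 1); setLabel('clicks so far') }}>
      {count} {label}
    </button>
  )
}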

Instead of manually tracking side effects of components using componentDidMount and componentWillUnmount, we can use (no pun intended) the new useEffect (docs) function that React provides in 16.7 (alpha). This is probably the most promising feature I saw from Dan Abramov’s presentation.

Rather than segmenting application logic throughout these various lifecycle methods:

  componentDidMount() {
    // Do something here because we mounted our component
    // (handleFooEvent is a hypothetical handler defined elsewhere on the class)
    window.addEventListener('foo_event', this.handleFooEvent)
  }

  componentWillUnmount() {
    // Make sure we undo whatever we did when we mounted!
    window.removeEventListener('foo_event', this.handleFooEvent)
  }

We will now be able to import the useEffect function and use it like this:

import { useEffect } from 'react'

// handleFooEvent is a hypothetical handler defined elsewhere
export default function FooComponent() {
    useEffect(() => {
        // Do a thing here... maybe add an event listener
        window.addEventListener('foo_event', handleFooEvent)

        // If we want the event listener removed when our component is unmounted,
        // we just need to return a cleanup function containing the teardown actions
        return () => {
            window.removeEventListener('foo_event', handleFooEvent)
        }
    })

    return null // render whatever the component needs here
}

That’s really it. It may seem a bit magical (it certainly feels like it), but React will perform the same duties it used to, with all of the logic nicely grouped together! This makes writing and debugging side effects much easier.

One more big change.

One of the worst parts of React was dealing with boilerplate code, particularly when it comes to shouldComponentUpdate. shouldComponentUpdate is rarely used for much more than checking whether nextProps differ from the current props based on some criteria.

This is some very typical shouldComponentUpdate boilerplate that I’m sure you’re familiar with:

    shouldComponentUpdate(nextProps) {
        if (this.props.id !== nextProps.id) {
            return true
        }
        return false
    }

We are just checking if the incoming props have a different id than the props we already have. This is a fairly standard check for a lot of React components. What if React could just diff our props for us, and only update when necessary?

// Example taken from https://reactjs.org/docs/hooks-reference.html#conditionally-firing-an-effect
useEffect(
  () => {
    const subscription = props.source.subscribe();
    return () => {
      subscription.unsubscribe();
    };
  },
  [props.source], // Only re-run the effect if props.source changes
);

Did I mention that when you use function components, there is no more binding to this? That alone is a compelling reason to begin exploring React Hooks.
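
A quick hypothetical component to illustrate - no constructor, no .bind(this), handlers just close over what they need:

import React from 'react'

// In a class you would write something like:
//   this.handleClick = this.handleClick.bind(this)   // in the constructor
// In a function component there is no `this` to bind. Handlers simply close
// over the props and state they need.
export default function LikeButton({ onLike }) {
  const handleClick = () => onLike('liked!')
  return <button onClick={handleClick}>Like</button>
}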

React Native isn’t there… yet

I really enjoyed James Long’s talk “Go Ahead, Block the Main Thread”, where he argued against a lot of common wisdom regarding Javascript. James talked about the viral impact of async functions - once a single function is async, everything in the codebase eventually follows suit. That’s never personally been a problem for me (I greatly enjoy using the asynchronous features of JS).

James argued that the asynchronous nature of React Native’s interaction with native APIs was harming UX. He showed some compelling examples of janky scrolling behavior that occurs when React Native’s asynchronous processes fall behind the screen render.

His solution: Block the main thread. Don’t let other tasks interrupt crucial animation rendering. What’s the best way to do that? Get rid of async, and allow synchronous flow between native APIs and React Native.

GraphQL is in vogue

Speaking of talks I enjoyed: Conor Hastings gave a great one about GraphQL.

In a typical REST API setup, a consumer requests data from an endpoint. The consumer has little to no control over what is delivered to them. To use Conor’s analogy, it’s like calling a pizza parlor for pizza delivery, and since they don’t know what toppings you like, they put 40 different toppings on the pizza, deliver it to your house, and tell you to pick off the ingredients you don’t like.

When you’re working on a front-end application and constantly consuming APIs for varying amounts of data, this can get exhausting. Want to get only the id and timestamp of a given set of rows? Too f’ing bad. Now your front-end application is stuck having to munge, filter, and extract data, even though you know exactly what you want. It’s like calling the pizza parlor, asking for pepperoni, and getting 40 toppings.

GraphQL seeks to enforce the concept of getting only what you need, when you need it. This concept is not limited to any sort of technology stack or implementation - it is simply (in my eyes) a philosophy of API design. With GraphQL, your frontend can intelligently query the API for only the data it wants.
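
As a sketch (the endpoint and field names here are entirely hypothetical), asking for just the pepperoni looks something like this:

// Ask for exactly the fields you want - id and timestamp - and that is
// exactly what comes back over the wire.
const query = `
  query {
    orders(last: 10) {
      id
      timestamp
    }
  }
`

async function fetchRecentOrders() {
  const response = await fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
  const { data } = await response.json()
  return data.orders
}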

This saves time in two huge ways:

  1. Less data over the wire. Your API is no longer attempting to cram unnecessary information into a response.
  2. Less processing/filtering by the front-end. Your front-end doesn’t really need to know or care about how the API works. It just wants some data.

Good Captioning

As someone who has a hard time hearing, I really, really appreciated the real-time captions provided by the conference. They were incredibly precise, accurate, and they made my conference experience a lot better. I am used to only hearing 50-60% of a speaker’s talk, and I really loved being able to look to the caption monitors and follow along.


Advanced Django and Python Performance

24 Oct 2018

My latest work project has involved writing a custom Django API from scratch. Due to numerous business-logic and front-end requirements, something like Django REST Framework wasn’t really a great option. I learned a great deal about the finer points of Python and Django performance while building an API capable of returning thousands of results quickly.

I’ve consolidated some of my tips below.

Django

Model Managers are useful - but beware of chaining them with other queries

Be careful using model managers, especially when working with Django Prefetch objects. You will incur additional lookup queries for any operations your manager chains onto the prefetched data (exclude, order_by, filter, etc.).
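
A sketch of what I mean (the model names here are hypothetical): once you chain another queryset method onto a prefetched relation, Django ignores the prefetched cache and goes back to the database for every row.

from django.db.models import Prefetch

authors = Author.objects.prefetch_related(
    Prefetch('books', queryset=Book.objects.order_by('-published_at')),
)

for author in authors:
    author.books.all()                  # served from the prefetch cache - no extra query
    author.books.filter(active=True)    # chained filter() bypasses the cache - one query per author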

Avoid bringing Python into it whenever possible

Do everything you can with properly written models, queries, and Prefetch objects. Once you start pulling rows into Python and doing the work there instead of in the database, you will significantly impact the performance of your application.

Django is fast. Databases are fast. Python is slow.
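
A quick illustration (with a hypothetical Order model) of what "keep it in the database" means in practice:

from django.db.models import Sum

# Slow: instantiates every Order and loops over them in Python
total = sum(order.amount for order in Order.objects.all())

# Fast: a single aggregate query, computed by the database
total = Order.objects.aggregate(total=Sum('amount'))['total']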

Learning to use select_related and prefetch_related will save you a ton of time and debugging. It will also improve your query speeds! As I mentioned above, be careful mixing model managers with these utilities. Also, whenever you begin introducing multiple relationships in a query, you will want to use distinct() and order_by().
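
A sketch with hypothetical models - select_related follows a foreign key with a JOIN, while prefetch_related batches the many side into a second query, turning an N+1 pattern into two or three queries total:

books = (
    Book.objects
    .select_related('publisher')     # FK/OneToOne: pulled in via a JOIN
    .prefetch_related('authors')     # M2M/reverse FK: fetched in one extra query
)

for book in books:
    # Neither of these touches the database again
    names = [author.name for author in book.authors.all()]
    publisher = book.publisher.name

Having said that…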

Watch out for distinct() gotchas

If you are using advanced Django queries that span multiple relationships, you may notice that duplicate rows are returned. No problem, we’ll just call .distinct() on the queryset, right?

If you only call distinct() and forget to pair it with a matching order_by() on your queryset, you can still receive duplicate results! This is a known Django “thing” - beware.

"When you specify field names, you must provide an order_by() in the QuerySet, and the fields in order_by() must start with the fields in distinct(), in the same order."
- Django Docs
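
Here’s a sketch of that pairing (hypothetical models; passing field names to distinct() is PostgreSQL-only):

# A filter that joins across a many-to-many can return the same Entry twice
entries = Entry.objects.filter(authors__name__startswith='J')

# Plain distinct() de-duplicates whole rows
entries = entries.distinct()

# distinct() with field names needs an order_by() that starts with those same
# fields, in the same order (PostgreSQL only)
entries = (
    Entry.objects
    .filter(authors__name__startswith='J')
    .order_by('blog__name', 'pub_date')
    .distinct('blog__name')
)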

Profile your Django queries

You can’t fix what you don’t measure. Make sure DEBUG=True in your Django settings.py file, and then drop this snippet into your code to output the queries being run.

from django.db import connection

# Add this block after your queries have been executed
if len(connection.queries) > 0:
    count, time = (0, 0)
    for query in connection.queries:
        count += 1
        print "%s: %s" % (count, query)
        time += float(query['time'])
    print 'Total queries: %s' % count
    print 'Total time: %s' % time

Python

Use map when performance matters AND the functions are complex AND you are using named functions. Use list comprehensions for everything else.

map is a built-in function written in C. Using map produces performance benefits over using list comprehensions in certain cases.

Please note that if you pass an anonymous lambda to map rather than a named function, you lose the optimization benefits of map, and it will in fact be much slower than an equivalent list comprehension. I will give you an example of this gotcha below.

def map_it(arr):
    return map(square_entry, arr)

def square_entry(x):
    return x ** 2

def list_comp(arr):
    return [square_entry(x) for x in arr]

def list_comp_lambda(arr):
    return [x ** 2 for x in arr]

def for_loop(arr):
    response = []
    a = response.append
    for i in arr:
        a(i ** 2)
    return response

To test the performance of these functions, we create an array with 10,000 numbers, and go through the array squaring each value. Pretty simple stuff. Check out the wild differences in runtime and performance:

  1. List comprehension with the expression inlined (list_comp_lambda): 5 function calls in 0.001 seconds
  2. For Loop: 10005 function calls in 0.048 seconds
  3. List Comprehension using named function: 10005 function calls in 0.049 seconds
  4. map with named function: 10006 function calls in 0.050 seconds

Moral of the story? If you are doing simple list operations, use a list comprehension with the expression written inline - no per-element function call. It is faster, more readable, and more pythonic.

When you’re munging complex data in Python, it’s a good idea to handle the data modification in a named function and then use map to call that function. You must always profile your code before and after using map to ensure that you are actually gaining performance and not losing it!

You might be asking: so, when should I use map?

A good candidate for map is any long or complex function that will perform conditional operations on the provided arguments. map functions are great for iterating through objects and assigning properties based on data attributes, for example.

Here’s an example of map being faster than a list comprehension (shamelessly taken from Stack Overflow - note this is Python 2, where map returns a list):

$ python -mtimeit -s'xs=range(10)' 'map(hex, xs)'
100000 loops, best of 3: 4.86 usec per loop
$ python -mtimeit -s'xs=range(10)' '[hex(x) for x in xs]'
100000 loops, best of 3: 5.58 usec per loop

Abuse try/except when necessary - but be careful

If you’re using inline try/except statements (where it’s no big deal if the try block fails), just attempt to do the thing you want to do, rather than using extraneous if statements.

Here’s some sample code and real profiling results to guide your decisions.

import os
import profile
import pstats

# This is a typical example of extraneous if statements
def get_from_array_slow(array, index):
    try:
        # A typical `if` statement here might check to make sure
        # That our array is long enough for the index to be valid
        # A perfectly reasonable statement, right?
        if len(array) > index:
            # Unfortunately, we incur an unnecessary performance penalty due to calling len()
            return array[index]
        else:
            return None
    except:
        return None

# This is functionally the same at runtime,
# but without the additional len() operation
def get_from_array_fast(array, index):
    try:
        return array[index]
    except:
        return None


NUM_TRIALS = 10000

def with_if():
    for i in xrange(0, NUM_TRIALS):
        get_from_array_slow([], 99)  # Out of index

def without_if():
    for i in xrange(0, NUM_TRIALS):
        get_from_array_fast([], 99)  # Out of index

# This is a simple way of using the profile module available within Python
def profileIt(func_name):
    tmp_file = 'profiler'
    output_file = 'profiler'
    run_str = '%s()' % func_name
    tf = '%s_%s_tmp.tmp' % (tmp_file, func_name)
    of = '%s_%s_output.log' % (output_file, func_name)
    profile.run(run_str, tf)
    p = pstats.Stats(tf)
    p.sort_stats('tottime').print_stats(30)  # Print stats to console
    with open(of, 'w') as stream:  # Save to file
        stats = pstats.Stats(tf, stream=stream)
        stats.sort_stats('tottime').print_stats()
    os.remove(tf)  # Remove the tmp file

profileIt('with_if')
profileIt('without_if')

Our profiler results are below. Using an if took 0.098 seconds; using only try/except shaved off one-third of the compute time, down to 0.065 seconds.

profiler_with_if_tmp.tmp

20004 function calls in 0.098 seconds

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
10000    0.049    0.000    0.073    0.000 profile_django.py:27(get_from_array_slow)
    1    0.025    0.025    0.098    0.098 profile_django.py:51(with_if)
10000    0.024    0.000    0.024    0.000 :0(len)
    1    0.000    0.000    0.000    0.000 :0(setprofile)
    1    0.000    0.000    0.098    0.098 profile:0(with_if())
    1    0.000    0.000    0.098    0.098 <string>:1(<module>)
    0    0.000             0.000          profile:0(profiler)

--------

profiler_without_if_tmp.tmp

10004 function calls in 0.065 seconds

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
10000    0.032    0.000    0.032    0.000 profile_django.py:41(get_from_array_fast)
    1    0.032    0.032    0.064    0.064 profile_django.py:56(without_if)
    1    0.000    0.000    0.000    0.000 :0(setprofile)
    1    0.000    0.000    0.065    0.065 profile:0(without_if())
    1    0.000    0.000    0.064    0.064 <string>:1(<module>)
    0    0.000             0.000          profile:0(profiler)

Notice that our function using if incurs twice as many function calls as our plain old try/except block.


AWS Certified Security - Specialty Exam Tips and Tricks

17 Aug 2018

If you’re planning on taking the AWS Security Specialty exam, I’ve compiled a quick list of tips that you may want to remember headed into the exam.

I passed the exam on August 18th, 2017. Before taking this exam, I held all three Associate certifications. This exam was harder than any Associate exam by far!

  1. Remember that CloudWatch (or any AWS service) cannot monitor your EC2 filesystems without an agent installed!
  2. Know KMS inside and out - this includes API commands like Decrypt, viaService, etc.
  3. Know the KMS key deletion policies and the differences between imported key material and AWS managed keys.
  4. Understand how cross-account access to various resources works.
  5. I had a lot of questions asking how to stop attacks from moving laterally across EC2 instances in a subnet. Most of the time you need to stop the instance and take a snapshot for forensic purposes. You also need to make sure that security groups in an Auto Scaling group do not allow traffic between instances on the same tier.
  6. Understand the difference between AWS Config, Trusted Advisor, and CloudTrail. They try to mix these up CONSTANTLY to trick you.
  7. Understand how AWS works to limit the “blast radius” of compromised keys in KMS, and the concept of perfect forward secrecy.
  8. Budget your time and flag questions that fluster you. Come back to them later.
  9. Use the test to take the test. Sometimes you will get a question that asks you about a property of an AWS service. Later in the test, you may find a question that references that exact property and gives you the correct answer.
  10. There are usually two blatantly incorrect answers, and two answers that could be right. Narrow down your choices.
  11. CloudHSM was not present on my exam, but questions about Kinesis and Athena were.

Training Materials I Used

Videos I Watched

Whitepapers I Read


Advanced Javascript Tips and Optimization

25 Apr 2018

I’ve been working with Javascript for a few years now. It’s a wonderfully strange language, and there are a lot of ways you can start a project and write code.

Here is what I think I know, as of April 2018. This guidance is heavily biased by working with Electron/React/Node, all over the stack, over multiple types of projects.

You need types

Developing a modern JS application without typing is both painful and tedious. I helped write a 25k+ LOC Node codebase without any form of typing, and let me tell you: when you are debugging a year-old function you wrote and trying to piece together what the data structure must have been at the time, it’s nearly impossible.

Any codebase that goes beyond a toy/prototype level must immediately implement TypeScript, Flow, or some form of typing, even if it is just a ridiculously long comment above each function describing the expected inputs/outputs (don’t actually do this - oh wait, this basically describes working with Python).
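
Even a small amount of typing buys you that context back. A minimal sketch (the shapes here are made up):

// With the shape written down, "what did this data look like a year ago?"
// is answered by the compiler instead of by archaeology.
interface Order {
  id: string
  total: number
  items: string[]
}

function applyDiscount(order: Order, percent: number): Order {
  return { ...order, total: order.total * (1 - percent / 100) }
}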

Make business-logic strings constant variables

If you frequently find yourself writing code like this:

if (Animal.type === 'dog') {
    // woof
}

Let me give you some advice. One day, one of your amazing engineers is going to write dogg instead of dog, and you’re going to miss it because you’re too busy making coffee, and that dumb code is going to end up in production, and your users are going to be rudely greeted by a stupid cat (or an error) instead of an awesome dog when they click “SHOW ME DOGS” on your new app, DogCatHorseShow!.

Instead, try this. The first time you write a comparison like above, stop yourself, take a deep breath, and do this.

Create a file called constants.ts (you’re using TypeScript, right? :))

The file should look like this:

export const kDogType = 'dog'
export const kCatType = 'cat'
export const kHorseType = 'majestic filly'

And now, your business logic file should look like this:

import { kDogType } from './constants';
if (Animal.type === kDogType) {
    // woof
}

Now you and your co-workers can safely compare values and you know that you’re all working with the same strings. If this example seems contrived, I want you to know that I just spent a lot of time fixing errors related to followup vs follow-up.

Consolidate your business logic EARLY, or you’ll end up with important comparisons scattered everywhere and subtle errors permeating your code.

Don’t mix data types in arrays

Although the Chrome team is making admirable gains in performance here, it is ALWAYS slower to include mixed data types in an array.

// This is bad and slow
const slow = ['string', null, {}, 'this is bad', [ { a: 'b' } ], 10]

// This is fast and good
const fast = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

You can see this performance impact for yourself here: JSPerf

Don’t write “multi-purpose” functions

The V8 engine can’t optimize functions that are passed varying data types. If you really are trying to eke out every bit of performance, write pure functions that accept consistent data types, and make sure you only use them as intended!

Operations are monomorphic if the hidden classes of inputs are always the same - otherwise they are polymorphic, meaning some of the arguments can change type across different calls to the operation. - from Performance Tips for JavaScript in V8.

// This function seems innocent enough, right?
const add = (a, b) => a + b

add(1, 2)     // Returns 3. So far, so good
add('1', '2') // Returns '12'. This is getting bad
add({}, [])   // V8 throws its hands in the air and says "screw this"

Abusing functions like this, while convenient for the programmer, can lead to a nearly 50% reduction in speed.

Yikes!

Bundle operations that involve external resources

This one is probably a no-brainer, but consider batching any events or actions that rely on external resources to complete.

When you’re updating a JSON file or a database somewhere, you’re going to deal with incredible asynchronous headaches if your code has a theme of “each individual action manages itself.”

Consider writing your code in a way that lends itself to batching of updates - whether you’re doing HTTP requests, file updates, DOM reflows, or DB calls. Focusing on batching early in your codebase will allow you to scale faster when you start handling more actions and events.
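
Here’s a rough sketch of the idea (saveAllToDisk is a hypothetical bulk-write function): queue individual updates and flush them together.

const pending = []
let flushTimer = null

function queueUpdate(update) {
  pending.push(update)
  if (!flushTimer) {
    // Coalesce everything that arrives within the next 100ms into one write
    flushTimer = setTimeout(flush, 100)
  }
}

async function flush() {
  const batch = pending.splice(0, pending.length)
  flushTimer = null
  await saveAllToDisk(batch) // hypothetical bulk write - one call for the whole batch
}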

Keep functions small

You should already do this just for cleanliness and understandability, but you should also know that the length of your written function (including comments(!!)) has an impact on performance.

The takeaway here should still be to keep functions small. At the moment we still have to avoid over-commenting (and even whitespace) inside functions. - from v8-perf discussion on Github

Write tests early and often

In a greenfield codebase, every feature added to the codebase should have an accompanying test. This prevents code rot from setting in, and it keeps the codebase style reasonable and testable from an early stage.

Especially when you’re doing open-source driven work, you’re frequently going to have subtle breakages when you upgrade/update your npm packages. Automated tests are the only way you’re going to catch them - yes, even for your weekend project that you’ll definitely never pick up again.

Look, I’m not saying you gotta go all TDD on your personal projects, but… one day you’ll want to go back to that project and use it, but you won’t be able to touch the code without breaking it. Been there, done that, learned my lesson.