If you’re planning on taking the AWS Solutions Architect Professional exam, I’ve compiled a quick list of tips that you may want to remember heading into the exam.
I passed the exam on November 24th, 2018. Before taking this exam, I held all three Associate certifications and the Security Specialty certification. I passed with an 80% score, and it took 69 minutes.
This exam is very difficult - on par with the Security Specialty exam! You will not accidentally pass this exam. :)
I am flying home from React Conf 2018, hosted at the Westin Hotel in Lake Las Vegas. Here are some of my key takeaways from this year’s conference.
The conference started out strong with Dan Abramov demonstrating new React hooks to the crowd. I heard many, many audible gasps from the crowd as Dan walked us through using hooks to replace traditional React classes. I had the feeling that I was watching a new version of React being demoed right before our eyes. Although the API is unstable and experimental at this point (it’s just an RFC right now), it is immediately clear that this is the future of React. I am very grateful I was there to hear the announcement in person.
So, what are hooks going to do for you? Well, first of all, you’re going to be writing a lot less boilerplate class code. React 16.7 (alpha) allows us to use functions rather than classes as our components.
State management will be handled by React’s `useState` function (docs). I will say that I felt a little strange about the ergonomics of `useState` (you have to call it sequentially; ordering matters). I think that for complex components that require lots of state updates, I would still write traditional classes. This `useState` pattern seems best suited for simpler components that only hold a couple of values in state.
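A minimal sketch of the new pattern, based on the hooks RFC: a counter written as a function component.

```jsx
import React, { useState } from 'react';

function Counter() {
  // useState returns the current value plus a setter. Hooks are tracked
  // by call order, which is why they can't live inside conditionals.
  const [count, setCount] = useState(0);

  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}
```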
Instead of manually tracking the side effects of components using `componentDidMount` and `componentWillUnmount`, we can use (no pun intended) the new `useEffect` function (docs) that React provides in the 16.7 alpha. This is probably the most promising feature I saw from Dan Abramov’s presentation.
Rather than segmenting application logic throughout various lifecycle methods:
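Here, `chatAPI` is a hypothetical subscription API, used purely for illustration:

```jsx
class FriendStatus extends React.Component {
  state = { isOnline: null };

  // The subscribe and unsubscribe halves of a single concern live
  // in two different lifecycle methods.
  componentDidMount() {
    chatAPI.subscribe(this.props.friendId, this.handleStatusChange);
  }

  componentWillUnmount() {
    chatAPI.unsubscribe(this.props.friendId, this.handleStatusChange);
  }

  handleStatusChange = (status) => {
    this.setState({ isOnline: status.isOnline });
  };

  render() {
    return this.state.isOnline ? 'Online' : 'Offline';
  }
}
```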
We will now be able to import the `useEffect` function and use it like this:
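A sketch with the same hypothetical `chatAPI`, now with the subscribe and unsubscribe logic grouped in one place:

```jsx
import React, { useState, useEffect } from 'react';

function FriendStatus({ friendId }) {
  const [isOnline, setIsOnline] = useState(null);

  useEffect(() => {
    const handleChange = (status) => setIsOnline(status.isOnline);
    chatAPI.subscribe(friendId, handleChange);
    // The function we return is the cleanup step, replacing
    // componentWillUnmount.
    return () => chatAPI.unsubscribe(friendId, handleChange);
  });

  return isOnline ? 'Online' : 'Offline';
}
```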
That’s really it. It may seem a bit magical (it certainly feels like it), but React will now perform the same duties that it used to, but with all of the logic nicely grouped together! This makes writing and debugging side effects much easier.
One more big change.
One of the worst parts of React was dealing with boilerplate code, particularly when it comes to `shouldComponentUpdate`. `shouldComponentUpdate` is rarely used for much more than checking whether `prevProps` differ from `nextProps` based on some criteria.

This is some very typical `shouldComponentUpdate` boilerplate that I’m sure you’re familiar with:
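A sketch, using a hypothetical `Row` component keyed off an `id` prop:

```jsx
class Row extends React.Component {
  shouldComponentUpdate(nextProps) {
    // Only re-render when the prop we care about actually changed.
    return nextProps.id !== this.props.id;
  }

  render() {
    return <div>{this.props.id}</div>;
  }
}
```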
We are just checking if the incoming props have a different `id` than the props we already have. This is a fairly standard check for a lot of React components. What if React could just diff our props for us, and only update when necessary?
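For function components, that’s essentially what `React.memo` (added in React 16.6) does: it skips re-rendering when a shallow comparison of the props finds no changes. A sketch:

```jsx
// Re-renders only when id or url actually change.
const Avatar = React.memo(function Avatar({ id, url }) {
  return <img src={url} alt={`avatar ${id}`} />;
});
```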
Did I mention that when you use function components, there is no more binding to `this`? That alone is a compelling reason to start exploring React Hooks.
I really enjoyed James Long’s talk “Go Ahead, Block the Main Thread”, where he argued against a lot of common wisdom regarding JavaScript. James talked about the viral impact of `async` functions - once a single function is async, everything in the codebase eventually follows suit. That’s never personally been a problem for me (I greatly enjoy using the asynchronous features of JS).
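A tiny illustration of that viral effect, with a hypothetical `getUser` helper:

```js
// Once getUser is async, every caller that needs its result must
// await it, which forces those callers to become async too.
async function getUser(id) {
  const res = await fetch(`/api/users/${id}`);
  return res.json();
}

async function renderProfile(id) { // forced to go async as well
  const user = await getUser(id);
  document.title = user.name;
}
```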
James argued that the asynchronous nature of React Native’s interaction with native APIs was harming UX. He showed some compelling examples of janky scrolling behavior that occurs when React Native’s asynchronous processes fall behind the screen render.

His solution: block the main thread. Don’t let other tasks interrupt crucial animation rendering. What’s the best way to do that? Get rid of `async`, and allow synchronous flow between native APIs and React Native.
Speaking of talks I enjoyed, Conor Hasting’s talk about GraphQL was another highlight.
In a typical REST API setup, a consumer requests data from an endpoint. The consumer has little to no control over what is delivered to them. To use Conor’s analogy, it’s like calling a pizza parlor for pizza delivery, and since they don’t know what toppings you like, they put 40 different toppings on the pizza, deliver it to your house, and tell you to pick off the ingredients you don’t like.
When you’re working on a front-end application and constantly consuming APIs for varying amounts of data, this can get exhausting. Want to get only the `id` and `timestamp` of a given set of rows? Too f’ing bad. Now your front-end application is stuck having to munge, filter, and extract data, even though we know exactly what we want. It’s like calling the pizza parlor, asking for pepperoni, and getting 40 toppings.
GraphQL seeks to enforce the concept of getting only what you need, when you need it. This concept is not limited to any sort of technology stack or implementation - it is simply (in my eyes) a philosophy of API design. With GraphQL, your frontend can intelligently query the API for only the data it wants.
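For instance, a query for just the two fields we wanted above might look like this (`rows` is a hypothetical field name):

```graphql
query {
  rows {
    id
    timestamp
  }
}
```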
This saves time in two huge ways: your front end no longer has to munge and filter data it never wanted, and the network no longer has to carry bytes that will just be thrown away.
As someone who has a hard time hearing, I really, really appreciated the real-time captions provided by the conference. They were incredibly precise, accurate, and they made my conference experience a lot better. I am used to only hearing 50-60% of a speaker’s talk, and I really loved being able to look to the caption monitors and follow along.
My latest work project has involved writing a custom Django API from scratch. Due to the project’s numerous business-logic and front-end requirements, something like Django REST Framework wasn’t really a great option. I learned a great deal about the finer points of Django performance while building an API capable of delivering thousands of results quickly.
I’ve consolidated some of my tips below.
Be careful using model managers, especially when working with Django `Prefetch` data. You will incur additional lookup queries for the operations your manager performs, as well as for any further operations applied to the data (`exclude`, `order_by`, `filter`, etc.).
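Here’s a sketch of the trap, with hypothetical `Author`/`Book` models: anything that builds a new queryset off the related manager ignores the prefetched cache and goes back to the database.

```python
from django.db.models import Prefetch

authors = Author.objects.prefetch_related(
    Prefetch(
        "books",
        queryset=Book.objects.order_by("-published_on"),  # done once, in SQL
        to_attr="ordered_books",
    )
)

for author in authors:
    author.ordered_books                 # served from the prefetch cache
    author.books.filter(in_print=True)   # new queryset: an extra query per author!
```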
Do everything you can with properly written models, queries, and `Prefetch` objects. Once you start doing that work in Python instead of the database, you will significantly hurt the performance of your application.
Django is fast. Databases are fast. Python is slow.
### `select_related` and `prefetch_related`
Learning to use `select_related` and `prefetch_related` will save you a ton of time and debugging. It will also improve your query speeds! As I mentioned above, be careful mixing model managers with these utilities. Also, whenever you begin introducing multiple relationships in a query, you will want to use `distinct()` and `order_by()`.
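A sketch, with hypothetical `Book`, `Author`, and `Genre` models:

```python
# select_related follows foreign-key relations with a SQL JOIN (one query);
# prefetch_related fetches many-to-many rows in one extra query, not one per row.
books = (
    Book.objects
    .select_related("author")
    .prefetch_related("genres")
)

for book in books:
    print(book.author.name, [g.name for g in book.genres.all()])
```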
Having said that…
### `distinct()` gotchas

If you are using advanced Django queries that span multiple relationships, you may notice that duplicate rows are returned. No problem, we’ll just call `.distinct()` on the queryset, right?

If you only call `distinct()`, and you forget to call `order_by()` on your queryset, you will still receive duplicate results! This is a known Django “thing” - beware.
"When you specify field names, you must provide an order_by() in the QuerySet, and the fields in order_by() must start with the fields in distinct(), in the same order."
- Django Docs
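That rule applies to the PostgreSQL-only `distinct(*fields)` form. A sketch with a hypothetical `Entry` model, picking each user’s most recent entry:

```python
# The field named in distinct() must come first in order_by().
latest_per_user = (
    Entry.objects
    .order_by("user", "-created_at")
    .distinct("user")
)
```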
You can’t fix what you don’t measure. Make sure `DEBUG = True` in your Django `settings.py` file, and then drop this snippet into your code to output the queries being run.
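A minimal version of that snippet: when `DEBUG` is on, Django records every executed query on `django.db.connection.queries`.

```python
from django.db import connection

# Each entry is a dict holding the raw SQL and its execution time.
for query in connection.queries:
    print(query["time"], query["sql"])
```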
Here are some performance hints I learned from doing a deep dive into Python for a work project.
### Use `map` when performance matters AND the functions are complex AND you are using named functions. Use list comprehensions for everything else.

`map` is a built-in function written in C. Using `map` produces performance benefits over using list comprehensions in certain cases.
Please note that if you pass an anonymous lambda as your `map` function, rather than a named function, you lose the optimization benefits of `map`, and it will in fact be much slower than an equivalent list comprehension. I will give you an example of this gotcha below.
To test the performance of these approaches, we create a list of 10,000 numbers and go through it, squaring each value. Pretty simple stuff.
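A sketch of that benchmark, profiled with `cProfile` (the original script may have differed in its details):

```python
import cProfile

numbers = list(range(10_000))

def square(n):
    return n * n

def with_named_map():
    return list(map(square, numbers))

def with_lambda_map():
    return list(map(lambda n: n * n, numbers))

def with_comprehension():
    return [n * n for n in numbers]

cProfile.run("with_named_map()")
cProfile.run("with_lambda_map()")
cProfile.run("with_comprehension()")
```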
Check out the wild differences in runtime and performance:

`map` with named function: 10006 function calls in 0.050 seconds

Moral of the story? If you are doing simple list operations, use list comprehensions rather than `map` with anonymous lambdas. They are faster, more readable, and more Pythonic.
When you’re munging complex data in Python, it’s a good idea to handle the data modification in a named function and then use `map` to call that function. You must always profile your code before and after using `map` to ensure that you are actually gaining performance and not losing it!
You might be asking: so, when should I use `map`?

A good candidate for `map` is any long or complex function that performs conditional operations on the provided arguments. Functions passed to `map` are great for iterating through objects and assigning properties based on data attributes, for example.
Here’s an example of `map` being significantly faster than list comprehensions (shamelessly taken from Stack Overflow):
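A sketch of the commonly cited comparison, timed with `timeit`: when the work is a single named, C-implemented function like `str`, `map` skips the per-item overhead of a Python-level loop body.

```python
import timeit

# map with a named built-in avoids re-evaluating a Python expression per item.
print(timeit.timeit("list(map(str, range(1000)))", number=10_000))
print(timeit.timeit("[str(n) for n in range(1000)]", number=10_000))
```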
If you’re using inline `try/except` statements (where it’s no big deal if the `try` block fails), just attempt to do the thing you want to do, rather than using extraneous `if` statements.
Here’s some sample code and real profiling results to guide your decisions.
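A sketch of the comparison (the items and attribute names here are hypothetical): on the happy path, the `if` version makes two function calls per item where the `try` version makes only one.

```python
import cProfile
from types import SimpleNamespace

items = [SimpleNamespace(value=n) for n in range(100_000)]

def with_if():
    out = []
    for item in items:
        if hasattr(item, "value"):   # an extra function call on every item
            out.append(item.value)
    return out

def with_try():
    out = []
    for item in items:
        try:
            out.append(item.value)   # just attempt the access
        except AttributeError:
            pass
    return out

cProfile.run("with_if()")
cProfile.run("with_try()")
```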
Our profiler results: using an `if` took 0.098 seconds, while using only `try/except` shaved off one-third of the compute time, down to 0.065 seconds.
Notice that our function using `if` incurs twice as many function calls as our plain old `try/except` block.