CSS, Cats, and Mind Clutter

Writing Self-Feedback

I’ll be frank. I hate writing about myself, and yet, it’s that time of year again when it needs to happen, or else you’ve made it incredibly difficult for your manager to go to bat for you for rewards or a promotion.

Here’s how I approach things:

If I’ve been good, the work that I’ve done has been generating artifacts. As a software engineer it looks like PRs, closed epics, various sorts of written documentation, and even text chats. For the review period find your links. You don’t need to digest them yet, just put them in a document, the more the better.

Synthesize

Found all your links? Good, for the next part we try to find common themes. Did you ship something impactful that moved key metrics for the company? Did you help someone else grow? Do you have examples of how you’re performing at the next level for that promotion you’re aiming for? Do you have key skills you’d like to highlight?

Write these out in bullet points and associate the backing links.

Be blunt about your career goals

Reflect a bit and think of what you want to try next. Do you want to try the management track? Go back to being an individual contributor? Interested in that rotation that will help you grow? Is life busy, and are you happy where you are? That’s perfectly fine too.

Or perhaps, you have no clue what you want? That’s normal, but life is short. Why not work a bit to figure out which direction to try? Genuinely chat with folks in other roles to see what makes you excited. Try things and keep track of what gives you energy and what doesn’t. If this sounds interesting, I’ve found the book Designing Your Life to be very helpful.

Whatever it is, write it clearly. Chances are you’ll be using this for the looking-forward section. Ideally your manager already knows what your career goals are from your 1:1 chats. If you don’t write this part down at all, don’t be surprised if opportunities don’t arrive. After all, your manager cannot read your mind, and they may not be experienced enough, or may be too busy, to direct career chats. Be clear and blunt.

Draw the owl

Okay you’ve got this! Using your collected themes and goals, write your feedback.

I like doing this in an outside editor like Google Docs. It keeps a record of my self-feedback if I ever move on, and it also saves work much more reliably than whatever terrible feedback system your company has likely implemented.

If you like procrastinating, one popular option is writing an excessively long AI chat prompt to summarize this for you. I find that it’s faster to just write it myself, but you do you!

Summary

So there you have it:

  1. Find your work artifacts/links.
  2. Group them into themes.
  3. Clearly state your career goals. Your manager cannot read your mind.
  4. Draw the Owl (Just write it!).

Client Performance Profiling

Have you ever noticed a slow interaction in your React application and aren’t quite sure how to start solving it? We might think that performance problems are too hard to fix because they’re too difficult to understand, but just like with Debugging Hard Things, don’t despair! We can use the same tools to help us with performance profiling that we use when tracking down hard-to-find bugs.

If we get stuck:

  • Take a pause with the beginner’s mindset. It’s okay to not know how something works. We can take some time to learn some of the basics to help us move forward.
  • Remember that computers aren’t magic! 🪄 There’s a reason why this is happening. If we’re having trouble figuring it out, it might be for a reason that’s on a layer that we’re not familiar with. 
  • Know that we can break big problems into smaller ones and work systematically to rule out or narrow down theories.

Specifically when tackling performance issues, we can also help narrow this down by:

  1. Picking a specific measurable scenario to improve.
  2. Verifying that we can consistently reproduce the scenario.
  3. Profiling to find top slow areas.
  4. Making a hypothesis and testing it.
  5. Repeat steps 3 and 4 until we are happy with performance gains. This might take a lot of trial and error before we get our ah-ha moment!

To dig into what this means, let’s explore this further under a lens of trying to fix a slow focus in the WordPress Block Editor.

Pick a Scenario to Improve

When attempting to improve performance, instead of an ambiguous goal of trying to make an application fast, it helps to pick something concrete to improve that we can measure.

The more specific it is, the more likely it is that we’ll be able to find a smaller fix or actionable path forward.

Let’s first explore to see if our performance scenario is a good problem to tackle. In our example, we have reports that focus is a bit slow on large posts, but only when the ListView sidebar is open! How curious!

To verify performance issues quickly it can help to exacerbate the test conditions. For example, if we think performance degrades with the size of a post, try adding a lot of content. In another scenario if we think an issue is pretty sensitive to lower powered devices, we can try artificially limiting CPU speed in the browser performance tab or using a real device to confirm results. If we think a specific component or piece of logic is to blame, try quickly commenting out the entire section. We wouldn’t check that in, but it’s a quick way of pinpointing the general issue!

For this example, I created a large post with ~900 blocks and tried focusing with List View open. As we can see in the video below, there’s a visible lag between a click and when the actual editor caret moves.
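
If you need to build a similar stress-test post, something like this, run in the browser console on an editor page, can bulk-insert blocks. It’s a rough sketch that assumes the editor exposes the wp.blocks and wp.data globals; adjust the count and block type to taste:

// Rough sketch: bulk-insert ~900 paragraph blocks to create a large test post.
const testBlocks = Array.from( { length: 900 }, ( _, i ) =>
	wp.blocks.createBlock( 'core/paragraph', { content: `Test paragraph ${ i }` } )
);
wp.data.dispatch( 'core/block-editor' ).insertBlocks( testBlocks );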

So keeping track of what we’ve done so far, we’ve:

  • Confirmed that the focus performance problem is specific and concrete ✅

Make Sure We Can Consistently Reproduce The Behavior

Another thing to check before we try any code improvements is whether we can consistently reproduce the behavior. This usually means thinking through a way to consistently measure how long the action takes as well.

Many performance optimizations may make code more complex, can unintentionally break functionality, and may even slow down the application when applied incorrectly. So by default, we should not accept a performance change unless it demonstrates that things are better (fewer renders, less delay for user interactions, etc.). To demonstrate that a change helps performance, we need to be able to measure how fast the action is before and after a proposed fix.
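
One lightweight way to put a number on a single interaction, before reaching for a full test suite, is the browser’s User Timing API. This is just a sketch of the idea; where the marks go depends on the code path being measured, and the measure name here is made up:

// Sketch: bracket the slow interaction with marks, then read back the duration.
performance.mark( 'block-focus:start' );
// ...trigger the interaction we care about here...
performance.mark( 'block-focus:end' );
performance.measure( 'block-focus', 'block-focus:start', 'block-focus:end' );
console.log( performance.getEntriesByName( 'block-focus' ).pop().duration );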

If possible, it’s also better to use measurements that can be reproduced by others, ideally via some automation in your continuous integration loop. Relying on manual profiling, and requiring others to know how to profile, can be a recipe for misinterpretation. Even experienced folks may get it wrong sometimes.

So going back to our slow focus example, it’s been reported by others as an issue, and we just verified it using our large test post. It does this consistently.

In our example, the WordPress block editor does have automated performance tests. However, if we watch what the tests do with the following commands, we can see that they run with the ListView component closed. We have a bit of work to do to update the tests, but let’s circle back to that later.

npm run test-performance -- --wordpress-base-url=<base> --wordpress-username=<username> --wordpress-password=<password> --puppeteer-interactive packages/e2e-tests/specs/performance/post-editor.test.js

npm run test-performance -- --wordpress-base-url=<base> --wordpress-username=<username> --wordpress-password=<password> --puppeteer-interactive packages/e2e-tests/specs/performance/site-editor.test.js

Let’s update our checklist:

  • Confirmed that the focus performance problem is specific and concrete ✅
  • The problem can be reproduced by me and others consistently ✅
  • The scenario is measured by existing performance tests ❌

Get Familiar with our Tools

Performance tooling improves at a good pace! This is really lovely ✨

So after we have a good scenario to profile, a great next step is to catch up on documentation for our tools and common gotchas, even if we’re somewhat familiar with performance profiling.

If we don’t know what tools are available, we can search for it. This is true for most problems. Have a memory leak? Start with a general query like “Debugging Memory Leaks” and refine with follow up searches for your particular tech stack. In our case, a quick search for “Debugging Client Performance React” currently leads to articles on browser developer tools and React developer tools.

In the following section, I’ll be walking through some basics with Chrome devtools (other modern browsers have nice tooling as well) and React devtools, but this information is likely to get out of date. The next time you debug a client performance issue, save yourself some time by going back to documentation to learn what’s available!

Profiling in Developer Tools

One great place to start is by verifying an issue in the browser performance tab.

As an aside please remember to verify profiling results in a production build! For applications that use React see this gist. Some issues may only appear in a development environment, or may behave differently in production.

Let’s open browser devtools and click on the Performance tab. Once we’re ready, hit record, then perform the slow action; in our case, we can click between two blocks a few times in the editor. After we stop recording, we get a lot of information on what just happened.

Chrome Performance tab

What gets spit out may look a bit intimidating, but let’s dive deeper! One thing I recommend doing first is expanding the performance window so we can get a better look at the information that’s provided. On this run I recorded with screenshots so I could get a better sense of what the editor was doing while tracing a particular function call.

In this case, we’ll want to zoom in on a slow focus event. We get hints of where this is with the long running task, the dip in FPS and the high CPU usage. Zooming in, we do indeed see that the focus call takes around 900ms to resolve.

Sometimes we might be able to spot a slow function specific to the component we can optimize, but in this case the call stack points at internal application state wiring and the React render workloop.

We’ll need to use a second tool to help see why this is taking up so much time!

React Devtools

Luckily for us, React has a browser extension called React Devtools.

To start, install React developer tools as a browser extension. Official documentation for React Devtools was a bit slim, but it does have an interactive tutorial!

Click on the profiler tab.

Then click on the gear in the top right corner:

We can now set some defaults. Since we know we’re looking at renders around 900ms, let’s ignore anything below 300ms. This helps reduce the noise when viewing results. Also check “Record why each component rendered while profiling”.

In the Component tab, check “Always parse hook names from source”.

Next, let’s hit the start profiling button:

Perform the action we’re interested in profiling. In this example I click on four different blocks in the main editor with ListView open. Click the profiling button again to stop recording.

The bars here may be colored differently. “Hotter” colors mean it took more time to render and “cooler” colors less.

Now we can see four distinct events. Use the Flamegraph to see which root parent is causing the slowdown. In this case it’s ListView. The “Why did this render?” section shows that Hooks 5, 14, 16, 28, 29, 30, 46, 77, 98, and 104 have changed. That doesn’t look useful yet, but with some extra work we can see what those map to.

Where are the hook names? We’ll need to go back to the Components tab. We can see the numbered hooks in the hooks section. We may need to expand items to see all numbered hooks. Now we can match which hooks caused the update.

So from profiling we see that ListView updates on block focus, and it’s a slow render. We’re still not sure why this is happening.

It seems like we’re a bit stuck, but it’s not time to quit yet! Since we don’t know why this update is slow in React, let’s research what React is doing internally next.

  • Confirmed that the focus performance problem is specific and concrete ✅
  • The problem can be reproduced by me and others consistently ✅
  • The scenario is measured by existing performance tests ❌
  • Manually profiling shows that focusing on a block causes a slow ListView render. 🤔

Bonus: Figuring out why an object or array changed

Sometimes, we might also need to drill down into why an object or array prop changed for a React component. When this happens we can manually use some logging:

import { useRef } from 'react';
import { isEqual, union } from 'lodash';
// Logs which keys of an object or array prop changed between renders, and why.
function useLogIfChanged( name, value ) {
	const previous = useRef( value );
	if ( ! Object.is( previous.current, value ) ) {
		const diff = { old: previous.current, new: value };
		console.group( `${ name } changed:` );
		console.table( diff );
		const allKeys = union(
			Object.keys( value ),
			Object.keys( previous?.current )
		);
		allKeys.forEach( ( key ) => {
			const oldValue = previous?.current?.[ key ];
			const newValue = value?.[ key ];
			// If the values are deeply equal but the references differ, the culprit
			// is a new object or array; otherwise the data itself changed.
			const reason = isEqual( oldValue, newValue )
				? 'new object or array'
				: 'has changes';
			if ( oldValue !== newValue ) {
				console.log( `${ key } - reason: ${ reason }` );
				console.table( [ oldValue, newValue ] );
			}
		} );
		console.groupEnd();
		// Remember the latest value for the next render.
		previous.current = value;
	}
}

Then in the render of the component:

useLogIfChanged( 'layout', layout ); //In this example I'm looking into why the layout prop changed

And we’ll see in console:

On React Rendering

To make sense of our profiling results, let’s pause and find some resources that help us understand what React is doing. How React works may change, so make sure to research after any major updates. Here’s what I came up with:

A core principle of React is that UI is a value; React is not a workaround for a slow DOM. React will not magically make updating a lot of things on a page fast. (Note that React 18 may have better abstractions for this type of optimization work.)

In practice, React currently uses a process called Reconciliation to determine what changed in its internal representation and what to update in the DOM. Doing this perfectly is slow, so React’s heuristic algorithm makes two assumptions when generating DOM output:

  1. Two elements of different types will produce different trees.
  2. The developer can hint at which child elements may be stable across different renders with a key prop.
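
As a quick illustration of the second assumption, a stable key is what lets React match up list children between renders. A minimal sketch (TodoList and its props are made up for illustration):

// A stable id as the key tells React "this is the same item as last render",
// so reordering the array moves existing DOM nodes instead of recreating them.
function TodoList( { todos } ) {
	return (
		<ul>
			{ todos.map( ( todo ) => (
				<li key={ todo.id }>{ todo.text }</li>
			) ) }
		</ul>
	);
}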

A more practical takeaway to keep in mind while performance profiling is:

One’s mental model of how React works is likely incorrect.

By default, rendering a component will cause all child components to be recursively rendered. This is true even if a child’s props don’t change! Check out a great visualization of this in React always Rerenders by Alex Sidorenco.

Another point to keep in mind is that React uses shallow equality to determine whether props have changed. This means that if we pass a new array, object, or function (like a handler function) containing the same data, it will still trigger a render, since the reference has changed. Oh no!

This generally won’t matter for child components that aren’t rendered much, but it can make a difference in parent components that have many children.

<Foo onChange={ () => {} } /> // a new onChange prop each time!

<Bar layout={ [ 'default' ] } /> // a new layout prop each time!

<Baz options={ { mode: 'default' } } /> // a new options object each time!

There are a few tools we can use to avoid re-renders, namely useCallback, useMemo, and React.memo in the right situations. However, it’s also easy to apply these incorrectly and have them make no impact, or trigger bugs because the component no longer updates as often as it should. When using these tools, make sure we can measure the effects of our change!
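
For example, the inline props from the snippets above could be stabilized like this. This is a minimal sketch of the pattern (Baz, Parent, and the mode option are made up for illustration), not a recommendation to apply it everywhere:

import { memo, useCallback, useMemo } from 'react';

// memo() lets the child skip re-rendering when its props are shallow-equal
// to the previous ones.
const Baz = memo( function Baz( { options, onChange } ) {
	return <button onClick={ onChange }>{ options.mode }</button>;
} );

function Parent() {
	// Stable references across renders, so Baz's props stay shallow-equal.
	const onChange = useCallback( () => {}, [] );
	const options = useMemo( () => ( { mode: 'default' } ), [] );
	return <Baz options={ options } onChange={ onChange } />;
}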

Note that React internal implementation details may change too, so we also need to be careful not to over optimize on how React works currently.

For further reading, see also FAQ Internals and Reconciliation, A Mostly Complete Guide to React Render Behavior, and React always Rerenders.

So far we have:

  • Confirmed that the focus performance problem is specific and concrete ✅
  • The problem can be reproduced by me and others consistently ✅
  • The scenario is measured by existing performance tests ❌
  • Manually profiling shows that focusing on a block causes a slow ListView render.
  • Researched how React works to better understand profiling results. ✅

Make a Hypothesis

With our profiling tools in place and a basic understanding of how React works, we can then make a few hypotheses and test them out. For right now, our hypothesis is: “Focus is slow because too many things are re-rendering in ListView.”

  • Confirmed that the focus performance problem is specific and concrete ✅
  • The problem can be reproduced by me and others consistently ✅
  • The scenario is measured by existing performance tests ❌
  • Manually profiling shows that focusing on a block causes a slow ListView render. 🤔
  • Researched how React works to better understand profiling results. ✅
  • Hypothesis: Focus is slow because too many things are re-rendering in ListView.

Other hypotheses might look like X is slow because:

  • We make too many requests
  • The requests are too big
  • The amount of data we’re working with is too large
  • We’re caching but not using the cache / we’re caching the wrong thing / the cache is never valid
  • Our algorithm is something bad like O(n³) on a large list when it could be improved
  • We are taking up too many cycles on expensive tasks and need to break up the work to keep things responsive to user input, as sketched after this list. (The overall task will then take longer to complete, but we shouldn’t see any long-running script warnings or click/typing jank.)
  • and so on…
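
Here’s the sketch mentioned in the list above for that last style of fix: breaking a long task into chunks and yielding back to the browser between them (the function and parameter names are made up for illustration):

// Sketch: process a large list in chunks, yielding to the event loop between
// chunks so clicks and typing aren't blocked by a single long task.
async function processInChunks( items, processItem, chunkSize = 100 ) {
	for ( let i = 0; i < items.length; i += chunkSize ) {
		items.slice( i, i + chunkSize ).forEach( processItem );
		// Give the browser a chance to handle pending input and paint.
		await new Promise( ( resolve ) => setTimeout( resolve, 0 ) );
	}
}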

Let’s Measure Things!

While starting with manual performance profiling is a good way to get ideas, it’s much better if we can consistently measure our performance changes.

In the WordPress block editor, we have some automated performance tests that run. Some of the test cases include loading a large post and measuring things like typing and focus speed.

For this issue, I made sure to update the WordPress block editor performance tests to run with ListView open. The changes look a bit like this, for folks working more closely in the WordPress block editor. For other problems we might need to create a new performance suite to time the actions we’d like to improve.
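
The real changes live in the PR linked above, but the general shape of timing an interaction in a browser automation test looks something like this hypothetical Puppeteer-style sketch. It’s not the actual Gutenberg test code; the selector and the page global are assumptions about the test setup:

// Hypothetical sketch: measure how long focusing a block takes.
const focusDuration = await page.evaluate( async () => {
	const block = document.querySelector( '.wp-block-paragraph' ); // illustrative selector
	const start = performance.now();
	block.focus();
	// Wait for the next frame so the number includes re-renders triggered by the focus.
	await new Promise( ( resolve ) => requestAnimationFrame( resolve ) );
	return performance.now() - start;
} );
console.log( `focus took ${ focusDuration }ms` );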

In this instance, even though the dev loop is around 50 minutes to get results, it’s a great way to test our hypotheses without relying on reviewers being very proficient with performance profiling and interpretation.

Always make sure to measure performance results when using useMemo, useCallback or React.memo!

Slowing down to add performance tests may save you time! On my first investigation attempt, I did try to memo some suspected component props, but it had no effect! I only found this out by adding the performance tests and waiting for the results. Without measuring, we might have landed code that makes things more complex and doesn’t speed anything up!

  • Confirmed that the focus performance problem is specific and concrete ✅
  • The problem can be reproduced by me and others consistently ✅
  • The scenario is measured by existing performance tests ❌
  • Researched how React works to better understand profiling results. ✅
  • Hypothesis: Focus is slow because too many things are re-rendering in ListView
  • The scenario is measured by our updated performance tests ✅

Refine our Hypothesis

Unless we’re lucky (or very experienced with a problem we’ve seen before), refining a hypothesis or testing new ideas will often be the most time-consuming part. The work here basically boils down to:

  • Thinking of a new idea or modifying an existing hypothesis.
  • Testing the idea by measuring performance results before and after a potential fix, or running a test to rule out the hypothesis.
  • Repeating, if the idea doesn’t give us the results we’re expecting.
Creating and testing hypotheses is the hardest part!

It’s often at this stage where we might get the most frustrated or stumped. Just like with Debugging Hard Things, we can try a few techniques to unstick ourselves.

  • Assemble our known clues – Keep a list of what you’ve done! It’s probably more than you think. Note what you know and what you’ve ruled out. Revisit your clues often!
  • Surface for air – Take a break and write out your findings so far. Explaining the problem to someone else can work too.
  • Understand a system more deeply – Research a layer you don’t know enough about. For example, understanding the tools that are available to you or gaining a deeper understanding of how something works like React rendering.
  • Reduce the problem – If you can toggle the performance behavior, try ruling out what it can’t be to help narrow it down. To save yourself time, you can do this in pretty broad strokes. For client performance, try commenting out entire components. If the performance issue is still there, we can rule out those components as being a hotspot.

In our example, what I did next was review what I had done so far, and go through profiling results more carefully. For other performance issues it may take much more trial and error.

Looking at the React profiling results, one common pattern that emerges is that hooks update because selectedClientIds update. That array is a list of the currently selected blocks, so this makes sense… but why is it so slow?

From our React research we know that rendering a component will cause all child components to be recursively rendered. So the ListView component is rendering itself and all of its children on each block focus! When there are many blocks it’s slow.

I thought of two ways to fix this:

  • Simplify the component and move querying of selectedClientIds to the child components (#35706)
  • Memo the ListView child component, so it costs less to re-render ListView. (#36063)
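
To give a flavor of the first option: instead of the parent querying the whole list of selected blocks and passing data down, each child can subscribe only to its own selection state. A simplified sketch using @wordpress/data (ListViewRow is a made-up component name, not the exact PR code):

import { useSelect } from '@wordpress/data';

// Simplified sketch: the row subscribes only to its own selection state, so a
// selection change re-renders a couple of rows instead of the whole tree.
function ListViewRow( { clientId } ) {
	const isSelected = useSelect(
		( select ) => select( 'core/block-editor' ).isBlockSelected( clientId ),
		[ clientId ]
	);
	return <div className={ isSelected ? 'is-selected' : '' }>{ clientId }</div>;
}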

Taking care to demonstrate a slow focus call before and an improved focus call after each change, I actually ended up implementing both solutions. My first attempt, which moved where selectedClientIds was queried, was too fragile: almost immediately, another feature PR needed that information on the parent ListView component again! Remember that there may be multiple ways of handling a performance issue. Be flexible and be aware of your project’s needs!

  • Confirmed that the focus performance problem is specific and concrete ✅
  • The problem can be reproduced by me and others consistently ✅
  • Researched how React works to better understand profiling results. ✅
  • Hypothesis: Focus is slow because too many things are re-rendering in ListView ✅
  • The scenario is measured by our updated performance tests ✅
  • We verified a hypothesis and have confirmed that a fix has measurable results ✅

Summary

So putting this all together my performance profiling loop is:

  1. Picking a specific measurable scenario to improve.
  2. Verifying that we can consistently reproduce the scenario.
  3. Profiling to find top slow areas.
  4. Making a hypothesis and testing it.
  5. Repeat steps 3 and 4 until we are happy with performance gains. This might take a lot of trial and error before we get our ah-ha moment!

If we get stuck, we can use the same techniques as we do when Debugging Hard Things:

  • Put together a clues list of everything we’ve done so far and revisit it often!
  • Take a break and write out your findings so far.
  • Research what tools are available to help investigate the problem or learn more deeply about a layer you might not know about. For example React Rendering when looking at client performance issues.
  • Reduce the problem so we can narrow down the suspected area

#code

Debugging Hard Things: Safari Edition

Some problems can be hard for us to debug because we think they’re too difficult to understand. Signs that this may be happening are when we start making wild guesses, try the same thing over and over again, or copy and paste a solution we don’t understand in the hope that the problem will go away.

Hint: Adding more zeros usually won’t help…

I’ve seen this in practice from many smart and capable folks (including myself!) with concepts like z-index (did you know there are multiple stacking contexts, not a single global one?), CSS specificity (it really is just counting!), spotting memory leaks, puzzling through concurrency issues, or trying to work around browser bugs.

So what can we do instead, when we notice something that’s hard for us to debug? Plenty!

  • Take a pause with the beginner’s mindset. It’s okay to not know how something works. We can take some time to learn some of the basics to help us move forward.
  • Remember that computers aren’t magic! 🪄 There’s a reason why this is happening. If we’re having trouble figuring it out, it might be for a reason that’s on a layer that we’re not familiar with. (browser, compiler, DB, OS, API, network, a bad physical cable, and so on).
  • Know that we can break big problems into smaller ones and work systematically to rule out or narrow down theories.

So that all sounds great, but how do we apply that?

Let’s take a closer look at what this process can look like in practice. The following is a real example of a browser bug I hunted down and some techniques I have found helpful.

A Real Example: Flickering Elements in Safari

One issue that piqued my interest while working full time on the WordPress Block Editor was some puzzling behavior in Safari when scrolling post content.

The text was flickering on image captions and there was a black flashing when scrolling quickly.

Cursory searching showed that most people simply promoted more elements to their own compositing layer (more on this below) via some CSS like transform: translate3D(0,0,0); and called it a day.

While it’s pretty tempting to copy-paste a CSS rule like that, what made me pause before accepting a PR that did just that is:

  1. We don’t understand why this was happening.
  2. We don’t understand why this may have fixed it.
  3. We don’t understand what the consequences of doing so would be.

With that in mind, I dug in to try and provide an alternative solution. Here’s where I started:

Reproduce the Issue

A great first step to start when looking at a bug is testing to see if we can reproduce it.

Reproducing a bug lets us iterate on our theories and potential fixes in an ideally speedy test-a-fix and see-if-the-bug-is-still-there loop. If we can’t reproduce an issue, it doesn’t mean the problem doesn’t exist. It’ll just be much more difficult to iterate on. When things are not reproducible, sometimes we end up needing to test ideas by chatting with those who can reproduce the issue, or by bulletproofing a fix and verifying whether the error or issue goes away with production instrumentation 😭.

Observe and Ask

In this Safari example, it was straightforward to reproduce. The report already had a few videos attached, so I could verify which editors were being used by 🔍 looking at the UI elements, and I could spot the types of block content being used in the post as a starting point (paragraphs, cover block, gallery block, and more). Creating a quick test post and scrolling in Safari confirmed that this was an issue.

If there’s not enough information in a bug report, we can ask the reporter for more information. Screenshots or full videos can also help a lot when folks don’t know how to describe a behavior or lack the technical terms for what’s actually happening.

Reduce What’s Needed to Reproduce

A bug report might have hidden assumptions or ideas on why something is happening. These assumptions aren’t always correct, so it helps to confirm or rule these out ourselves. One of the first things I did was verify that the same content wasn’t showing any visual issues in Firefox or Chrome.

If we can reduce what’s needed to reproduce an issue, this can also help speed up our testing loop. With the flickering elements, I narrowed down post content to contain a single cover block and gallery.

Great so we can reproduce the issue! What’s next?

Assemble Our Known Clues

Like reading a murder mystery, it can help to keep our known facts or hints assembled together. It makes it easy to revisit when thinking of new ideas to investigate or if we need to go back and challenge our assumptions.

Initial Clues List

  • Glitches only display while scrolling in Safari?
  • The black flashing is from a bad browser paint?
  • Maybe related to images (since it’s easier to reproduce with that content)?
  • From quick internet searches: possibly related to layer compositing issues?

Pick a Clue to Investigate or Challenge

Sidenote: sometimes we can use past experience (intuition) to skip through some of these discovery steps. I personally have some experience working on reasonably complex canvas apps, so compositing came to mind when looking at the symptoms of this bug. For the purposes of this section, let’s assume that we have zero experience to work with.

Okay, so searching the internet for “Safari flickering scroll” gives us answers like: add this magic CSS rule that does nothing, transform: translate3D(0,0,0); (move this element nowhere in 3D space), or something like backface-visibility: hidden (a toggle with no visible effect in 2D), with zero explanation.

Very suspicious, right?

At this point, it might be easy to give up and think “This is weird! Just paste that answer and move on with our lives!” but let’s remain curious and dig deeper. It’s okay if we’ve never encountered a browser bug like this before or never had to dig more deeply into understanding browser rendering.

Be persistent, keep searching (or asking folks), and refine our questions!

  • With a follow-up search for “transform: translate3D(0,0,0);” we can see that we’re trying to get the browser to do something around hardware acceleration.
  • Another search on hardware acceleration hints at a rendering process called compositing.

Bingo! There’s something new to learn or refresh our memories about! Let’s look at browser compositing.

A great way of gaining a basic understanding is to find multiple (hopefully reputable) resources, read them, synthesize them, and try to explain the topic to someone else.

Here are the posts I used to try and summarize the next section:

Let’s give that explanation part a try!

Understand a System More Deeply: Compositing

Compositing is one of the last steps that a browser takes when turning a web page into pixels on your screen. It’s also an optimization over a more naive implementation.

Very broadly a modern browser renderer process handles:

  • Parsing: turning an HTML string into a Document Object Model (DOM), loading external resources (images, styles, JavaScript), and parsing and executing any JS.
  • Style: computing the style for each DOM node. (Which CSS rule won?)
  • Layout: calculating where to draw nodes and how big they should be.
  • Paint Order: what order should we paint elements? Think of how we might paint a real oil painting where we have some background mountains, a person, and a dog as our main subject. One method would be to paint back-to-front, drawing background elements first. Mountains, then the person, then the dog.
  • Paint and Compositing: Determine how to group elements into layers, paint each layer to fill with pixels, and then draw or put together each layer in the right order for a final image. Let’s go over this in more detail below.

What Is Compositing?

Animation of compositing process from https://developers.google.com/web/updates/2018/09/inside-browser-part3#what_is_compositing

While we might naively fill in pixels on our screen by painting each element in our viewport in paint order, this is slow. What if we break up parts of the page into their own layers that don’t change as much? Using our painting analogy, we might make a layer for the mountains, one for the person, and a last one for the dog. Like in cel animation, after we paint each layer, we can reposition each layer independently without needing to repaint the entire scene. For browsers this is very useful in scenarios like smooth animation or scrolling.

Determining what should be in a layer is non-trivial. These are internal implementation details and may vary by browser and change over time, but roughly a browser will create a new layer when it has:

  • 3D or perspective transform CSS properties
  • <iframe>, <canvas> or <video> elements
  • CSS animations and accelerated CSS filters
  • A descendant that is a compositing layer
  • A sibling with a lower z-index which has a compositing layer (in other words, the layer overlaps a composited layer and should be rendered on top of it)

Layers can also get pretty large, which can waste resources if the browser viewport only intersects a small part of one. We can optimize for this by subdividing a layer in a process called tiling. Going back to our painting analogy, think of how we might portion out squares of a large wall mural, in some prioritized order, for multiple artists to draw.

In the browser, determining what layers to create and how to put them back together again is usually split out to a compositor thread, which may in turn also create child threads to give small pieces of work to the Graphics Processing Unit (GPU). The GPU is great at small tasks like painting pixels for polygons, hence the term hardware acceleration that gets thrown around.

On Performance

Another way of thinking about this is that each compositing layer acts as a pixel cache. This is of interest to us as web developers because some types of updates to a web page can skip parts of the expensive rendering process.

From most to least expensive:

  • Layout Change: If we make an element bigger, smaller, or change its position on the page we can’t skip any steps of the rendering process. (Using cel animation as an analogy, think of needing to throw out all of our existing cels and needing to ink, color and reposition them.)
  • Paint Change: If we update a paint property like background, or color, we can skip layout. (Using cel animation as an analogy, we can repaint the existing cels, then reposition them).
  • Compositor change: If we only update a compositor-supported property (transform or opacity), we can skip layout and paint. When done correctly we can see very smooth animations and scrolls. (Using cel animation as an analogy, there’s no need to ink or repaint cels; we can simply reposition the layers for a different final image.)
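
A classic way to see the difference is animating a layout property versus a compositor-friendly one. A small CSS sketch (the class names are made up):

/* Layout change: every frame re-runs layout, paint, and compositing. */
.slide-slow {
	position: relative;
	transition: left 300ms;
}
.slide-slow.is-open {
	left: 200px;
}

/* Compositor-only change: the existing layer is simply repositioned. */
.slide-fast {
	transition: transform 300ms;
}
.slide-fast.is-open {
	transform: translateX( 200px );
}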

Why can’t we make everything a layer?

Have you tried promoting another layer? Maybe all of them? Is this perhaps a bad idea?

We can’t make everything a layer since the tradeoff is memory use and overhead for managing each of those layers! Done haphazardly, we can make our webpage much slower, or even crash! Like with most things we should profile to make sure layers make sense and are kept in check.

Revisit Our Browser Tools

After we understand a system more deeply, let’s check to see if browsers have any tools to help track this down! It’s always great to double check what debugging tools are available to us, since it makes investigation work go by much more quickly.

Thankfully, Chrome and Safari do have a Layers tab in devtools which lists these layers, their memory use, and why each layer was created.

Using the Layers Tool

So right out of the box, we can see a few suspicious things:

  1. We have a number of layers, some of them are very big!
  2. Some layers are caused by some position value, like “position: fixed”
  3. Others are caused by “-webkit-overflow-scrolling: touch”

Let’s update the clues list

  • Glitches only display while scrolling in Safari?
  • The black flashing is from a bad browser paint?
  • Maybe related to images (since it’s easier to reproduce with that content)?
  • Possibly related to layer compositing issues?
    • We have many layers. Some of them are big
    • Some layers are caused by some position value, like “position: fixed” or “-webkit-overflow-scrolling: touch”

Refine the Search Using New Information

Using the information taken from inspecting the layers, one thing that jumped out at me was -webkit-overflow-scrolling, since this issue only triggered on scroll (as far as we know).

What can we do with that? Well, maybe let’s try toggling the value!

And so here we try to override this in dev tools, but with an unhappy surprise! It’s an unsupported property! But somehow a compositing reason? How rude! 

Well how about we change overflow on .interface-interface-skeleton__content?

https://github.com/WordPress/gutenberg/pull/32637

This works to get rid of the glitches, but it breaks the sidebars. We can’t go with that approach of course, but we now have more information!

Let’s update the clues list

  • Glitches only display while scrolling in Safari?
  • The black flashing is from a bad browser paint?
  • Maybe related to images (since it’s easier to reproduce with that content)?
  • Possibly related to layer compositing issues?
    • We have many layers. Some of them are big
    • Some layers are caused by some position value, like “position: fixed” or “-webkit-overflow-scrolling: touch”
    • -webkit-overflow-scrolling: touch can no longer be set or unset. This is added automatically when overflow is set to scroll or auto. In other words, any scroller now uses compositing with hardware acceleration and we can’t turn it off.

Hidden Assumption

Sometimes we have assumptions on our clues list that are incorrect and can even limit our thinking. Let’s take a closer look at this one:

The black flashing is from a bad browser paint?

If true, this would almost 💯 be a browser bug to isolate and ideally report upstream.

This wasn’t the case! While debugging in the elements pane, I stumbled upon the fact that the color was coming from a .edit-post-layout .interface-interface-skeleton__content parent element, and we could make it any color we pleased, like pink! The gray overlay was intended to frame the tablet/mobile and template previews.

This wasn’t a graphics glitch, but possibly an incorrect ordering or compositing problem on scroll.

Let’s update the clues list

  • Glitches only display while scrolling in Safari?
  • The black flashing is from a bad browser paint? The background color is “bleeding” from .edit-post-layout .interface-interface-skeleton__content
  • Maybe related to images (since it’s easier to reproduce with that content)?
  • Possibly related to layer compositing issues?
    • We have many layers. Some of them are big
    • Some layers are caused by some position value, like “position: fixed” or “-webkit-overflow-scrolling: touch”
    • -webkit-overflow-scrolling: touch can no longer be set or unset. This is added automatically when overflow is set to scroll or auto. In other words, any scroller now uses compositing with hardware acceleration and we can’t turn it off.

Reduce the Problem

There’s a lot going on in the WordPress Block Editor. To help filter out the noise, two approaches can work for reducing a problem: we can start turning off larger pieces of logic in the app in exploratory PRs, OR create a new base case mimicking the conditions from scratch.

I opted to try and create a simple HTML/CSS test case, since I suspected a browser bug was still at play. It’d be much easier to test our guesses in simpler markup, and we’d need a simple test case for browser bug reporting in WebKit anyway.

My first attempt was this. I tried to pick out what I thought were the most important parts of the skeleton interface, along with what I hoped was enough test content to trigger the scrolling glitch.

I had partial success. I could trigger this on my large-resolution monitor (and see the behavior stop when I set the font-size back down to something more reasonable). Others, however, still couldn’t consistently reproduce it.

Spot the Difference

Another game I like to play when debugging is spot the difference, where we go piece by piece and make sure we note where a difference appears in one environment or another. This work can be a bit tedious and sometimes requires turning off your brain, similar to going through git bisect to test which commit caused a failure in production. Once we spot a difference, we can then dig in and question why that is.

So, as I was refining the test case to try and make it more reproducible for more folks, I noticed something. Do you see it?

One of the compositing layers was much bigger in the WordPress Block Editor than in the test case! In the Block Editor one layer was as tall as all the content in that pane! Meanwhile, my test case showed a layer that was only the size of the current browser viewport.

Let’s update the clues list

  • Glitches only display while scrolling in Safari?
  • The black flashing is from a bad browser paint? The background color is “bleeding” from .edit-post-layout .interface-interface-skeleton__content
  • Maybe related to images (since it’s easier to reproduce with that content)?
  • Possibly related to layer compositing issues?
    • We have many layers. Some of them are big. Our initial test case does not have such large layers; what could be causing it?
    • Some layers are caused by some position value, like “position: fixed” or “-webkit-overflow-scrolling: touch”
    • -webkit-overflow-scrolling: touch can no longer be set or unset. This is added automatically when overflow is set to scroll or auto. In other words, any scroller now uses compositing with hardware acceleration and we can’t turn it off.

Detailed Work

With that new clue in hand, I tried to make a much more accurate base case. I attempted to fully mirror the skeleton interface. It was a lot of divs 😭.

My second base case attempt was this, and from the methodical work a surprising item popped out: a simple div was the cause of the huge layer:

<div tabindex="0" style="position: fixed;"></div>

See how removing it gives us a much more reasonable layer size? With the scrollable browser pane at ~1584px × 588px in the test case, there was also around a 50MB difference in memory usage.
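
Roughly, the shape of the reduced test case was a scroller with a position: fixed child inside it, something like this sketch (not the exact markup from the linked test case):

<div style="height: 100vh; overflow: auto;"> <!-- the scrollable pane -->
	<div tabindex="0" style="position: fixed;"></div> <!-- the suspicious fixed div -->
	<div style="height: 10000px;">lots of scrollable content…</div>
	<!-- With the fixed div present, the Layers tab showed a compositing layer
	     as tall as the scrollable content instead of the viewport. -->
</div>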

Neat! I also opened a debug PR to see what would happen. In the Block Editor we insert this div in the content to aid in focus related issues while scrolling. When we don’t allow it to be inserted, we can see that this also gives us a more reasonable layer size.

We have mixed results: I wasn’t able to recreate the text flickering issue anymore, but I could still see the background bleed on scroll in some cases.

And for no apparent reason, doing so triggered a new fun glitch where the background color from .edit-post-layout .interface-interface-skeleton__content also “bleeds” into the scrollbar element when selecting an image.

Let’s update the clues list

  • Glitches only display while scrolling in Safari?
  • The black flashing is from a bad browser paint? The background color is “bleeding” from .edit-post-layout .interface-interface-skeleton__content
  • Maybe related to images (since it’s easier to reproduce with that content)?
  • Possibly related to layer compositing issues?
    • We have many layers. Some of them are big. Our initial test case does not have such large layers; what could be causing it? Adding a div with position: fixed inside the scrollable content will create a layer whose height equals the scrollable content height.
      • Removing the fixed div causes a different scrollbar glitch to appear. Removing the fixed div did not fix the background scroll flashing. Do we have more than one problem?
    • Some layers are caused by some position value, like “position: fixed” or “-webkit-overflow-scrolling: touch”
    • -webkit-overflow-scrolling: touch can no longer be set or unset. This is added automatically when overflow is set to scroll or auto. In other words, any scroller now uses compositing with hardware acceleration and we can’t turn it off.

Surfacing for Air

It’s easy and normal to get frustrated or even stuck when debugging hard things. When that happens, pause and surface for air. One great thing to do is recap what we’ve done so far. Reminding ourselves of what we’ve uncovered in our clues list, summarizing the pieces, and getting input on what to try next or which assumptions to test works wonders to unstick us!

So after writing an internal post at Automattic (which contained content like this post, just stopping at this part) I circled back and tried to decompose the problem. I was pretty sure we were seeing multiple issues instead of just one.

Workarounds

At this point, I was pretty sure that we were looking at at least one browser bug, perhaps several. The symptoms here were interesting enough to distill and report back to WebKit, but whether I could isolate a bug and see a fix land upstream in a reasonable amount of time was out of my control.

As web developers, if functionality is important, we often need to pragmatically find a workaround that re-implements the same functionality while avoiding the browser bug, even if we’ve isolated the bug or a fix has been found.

Background Bleeding on Scroll

The first workaround I focused on was the background “bleeding” on scroll. While I had a good lead on the flickering text issue, at the time I didn’t know what functionality I would break by deleting the position:fixed divs or what they were intended for. I asked for help in tracking down that “why” and went back to the background-flashing issue with fresh eyes.

I tried several things, like changing which components were toggling the overlay for tablet and mobile previews. Nothing I tried was quite right, and I still saw that color bleed. Taking a break, I looked at my clues list again.

Going back to my clues list:

  • Glitches only display while scrolling in Safari?
  • The black flashing is from a bad browser paint? The background color is “bleeding” from .edit-post-layout .interface-interface-skeleton__content
  • Maybe related to images (since it’s easier to reproduce with that content)?
  • Possibly related to layer compositing issues?
    • We have many layers. Some of them are big. Our initial test case does not have such large layers; what could be causing it? Adding a div with position: fixed inside the scrollable content will create a layer whose height equals the scrollable content height.
      • Removing the fixed div causes a different scrollbar glitch to appear. Removing the fixed div did not fix the background scroll flashing. Do we have more than one problem?
    • Some layers are caused by some position value, like “position: fixed” or “-webkit-overflow-scrolling: touch”
    • -webkit-overflow-scrolling: touch can no longer be set or unset. This is added automatically when overflow is set to scroll or auto. In other words, any scroller now uses compositing with hardware acceleration and we can’t turn it off.

Well it does look like the common thread here is the compositing layer. In this particular case, the background-color was also being set on a div that was a compositing layer.

I had been focusing so much on business logic; what would happen if we changed how the CSS rules were applied instead? If there was a browser bug, what if we moved the background-color rule to a div that wasn’t a compositing layer?

https://github.com/WordPress/gutenberg/pull/32747

Moving the background-color to a div that was not a compositing layer did the trick! The workaround PR was super simple in retrospect but took quite a bit of 🔍️ investigation to get there.
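
In CSS terms, the workaround boils down to something like this (the selectors and color are illustrative, not the exact rules from the PR):

/* Before (simplified): the background is painted by the scroller, which is its
   own compositing layer, and "bleeds" during fast scrolls in Safari. */
.scroller-that-composites {
	/* background-color: gray; */
}

/* After (simplified): the same background is painted by a child element that
   does not get its own compositing layer. */
.scroller-that-composites > .plain-background {
	background-color: gray;
}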

Back to Text Flickering

With the background bleeding fix in hand, I went ahead and rebased my debug PR on top of it. As a bonus, that scrollbar glitch was also fixed!

We already knew that removing the fixed divs would fix the text-flickering and doing so avoided creating a very large compositing layer on the block list wrapper.

Now it was a straightforward matter of understanding why the fixed divs were added and providing an alternative implementation to retain functionality. Thanks to others we learned that the divs were used to help prevent scrolling on tab. The workaround needed here was to remove these fixed divs and re-implement the scrolling on tab behavior in a different way:

https://github.com/WordPress/gutenberg/pull/32824

Isolate and Report

At this point all issues were resolved in Gutenberg with the workarounds that landed.

With limited time, it’d be pretty common to call it quits once we find a workaround for a browser bug. I was really curious about the root causes here too, so I wanted to isolate good test cases for WebKit bug reporting and hopefully get this fixed for others.

Doing this work took about as much time for me as it took to fix the problems in the WordPress Block Editor. I used all the techniques I noted before: reduce the problem, spot the difference, and lots of tedious detailed work.

One of the goals of isolation is to find the smallest test case possible that reproduces the behavior. If you have a good instinct on where the problem might lie, it might be faster to start from scratch and reproduce the conditions agnostic to your app. If not, it can make sense to start with where you can reproduce an issue, then try to chisel away as much as you can.

I also asked others to verify if they could reproduce a test case before reporting. Sometimes different operating system settings, or different hardware can be needed to trigger an issue. Aiming for a simple and easy to reproduce test case will usually lead to faster fixes in any project.

The Solutions

Background Flashing

We fixed this in the Block Editor by moving a background color to a div that was not a compositing layer.

https://github.com/WordPress/gutenberg/pull/32747

https://bugs.webkit.org/show_bug.cgi?id=227532

To isolate the problem, one missing ingredient was that it required very quick scrolling (usually from a mousewheel). WebKit maintainers also noted that the artifacts here are from tiled layer flashing (when we split up a large layer into smaller pieces to paint).

It should look like this:

Text Flickering

This was fixed in the Block Editor by removing two position:fixed divs, so we avoided creating a compositing layer that was very large.

https://github.com/WordPress/gutenberg/pull/32824

To isolate a test case, I had to brute force this one by turning off as much as I could in the editor, then performing a binary search on the styles to see what kept the glitch and what didn’t. The overall test case is a weird combination of flex styles, two fixed divs, an iframe, and some extra z-index stacking contexts.

https://bugs.webkit.org/show_bug.cgi?id=227705

After submitting the issue, WebKit maintainers quickly used the same technique to reduce what was needed to reproduce (https://bug-227705-attachments.webkit.org/attachment.cgi?id=432961 ✨). It turns out this was a duplicate of this regression, which already had a patch, and there was an amazing turnaround of about a day to get that committed.

See also this comment from Simon Fraser on what was happening:

Compositing backing sharing logic exists to reduce the count of composited layers,
allowing layers that would otherwise get composited to paint into the backing store
of some containing block ancestor (usually a scroller). A "backing sharing sequence"
is a backing-provider layer, and a set of layers contiguous in z-order that can
paint into that shared backing. If a layer becomes composited, it must interrupt
the sequence (because layers later in z-order must render on top).

The bug occurred when a layer became composited between the calls to 
BackingSharingState::updateBeforeDescendantTraversal() and BackingSharingState::updateAfterDescendantTraversal(),
for example because of an indirect reason like overflow positioning...

Bonus: Safari Scrollbar Wrong Color

I happened to stumble on this one while debugging a black scrollbar on a test commit, but thought it would be interesting to isolate. This was fast for me to find through luck (half-a-day), since I’m inherently suspicious of extra z-index contexts.

https://bugs.webkit.org/show_bug.cgi?id=227545

Somehow the overflow controls container is getting behind the scroller layer. Triggered by the negative z-index child

Simon Fraser https://bugs.webkit.org/show_bug.cgi?id=227545#c3

There’s already a committed patch in WebKit for this one! 🎉

Summary

So when debugging hard problems, don’t despair! Remember that it’s okay to not know how things work. If we notice a gap in our knowledge, we can take some time to fill it.

If we work systematically and keep track of what we know and don’t know so far, we can work toward finding a fix, or help narrow down what the problem might be. Techniques like picking a clue to challenge or investigate, reducing the problem, spotting the difference, doing the detailed work, and surfacing for air can all help move an issue forward.

When time allows, isolating an issue and reporting upstream can both help others and deepen our own understanding of what’s actually happening in a layer we are unfamiliar with.

I’m happily employed by Automattic. If working on these kinds of problems excites you, check out our open jobs page!

#code

How to Unstick a Pull Request

Sometimes pull requests can get stuck during code review. In many cases, it’s not because the changes were unneeded, but because the conversation just appears to… well, stop.

I’ll walk through five common problems pull requests get trapped in and what you can do as a reviewer to help move things along.

The Forgotten Pull Request


Symptoms: There might have been some conversation on the pull request but now it’s just dead silence. Reading back you see that the last comment was from six months ago without clear next steps. The pull request is still open, waiting for someone, anyone, to help nudge it.

Try: Ask the author if the pull request is still needed and if they’re interested in working on the problem. What happens next is either the pull request is swiftly closed OR the author tidies it up for a fresh set of eyes and a new review. ✨

What is the problem? What does this do?

Symptoms: The pull request summary is a terse sentence. In other cases, the pull request doesn’t summarize anything and buries you in context. For whatever reason, as a PR reviewer you’re not quite sure why the PR exists or how to test it.

Try: There’s no need to be a detective 🕵️‍♀️. If something is confusing, go ahead and ask the author! Being clear on why a PR is needed and what it does is not only good for the reviewer; it also provides context to future maintainers on why certain decisions were made.

Ideally we want to know:

  • What problem(s) the PR is addressing.
  • What the PR changes.
  • How we can manually test it.
  • What type of feedback the PR author is looking for.
  • Any trade-offs to the solution we should be aware of.

PR Checks are failing

Symptoms: Newer contributors to a project might not realize that their pull request has failing checks. Maybe the linter is not happy with some whitespace, or a test is broken and the author isn’t sure how to run the suite.

Try: Gently remind folks in a comment that checks have failed. If you have the time, point to documentation on how to run the tests and other contribution guidelines. It can also be helpful to troubleshoot environment issues with them or give them additional hints on how to clear up the issues. Using our best judgment, we can sometimes accept the PR in a less-than-perfect state and help fix the issues by asking, then making changes on the branch directly or in a follow-up PR.

No Clear Next Steps


Symptoms: We see a PR where other participants have left some previous reviews. It isn’t clear what needs to happen next. The conversation might even still be active, but it’s going on and on and on and on… to the point where GitHub thinks it’s a good idea to start hiding part of the timeline:

Try: Ask the active participants what needs to happen for the PR to either merge or be closed. It might be as simple as someone taking the time to summarize what was discussed and what needs to happen next, for example: fix tests, address design feedback, and update documentation. At other times, a PR working through multiple concerns might need to be split out: like starting a new PR exploring expanding an API, or proposing a framework change in a different medium like chat or long-form writing.

Additional Reviews Needed

Symptoms: You’re not confident approving the PR on your own. The changes affect multiple areas. Maybe the PR needs other expertise, like design or security feedback, that you’re unable to provide. You’ve done a great job already by knowing what you know and what you don’t know.

Try: Leave a review clearly stating what you tested, what feedback you have, and what type of feedback you think the PR needs to land. Manual testing and other partial reviews are still a great help to other reviewers. If you know who’d be great at unblocking the review, ping them in a comment with context. If you’re not sure, leave a note on what type of reviewer expertise is needed and, if you have time, help the author by asking other contributors who’d be a good fit to review.

Give it a try!

If you notice a stuck pull request that fits one of the patterns here, try unsticking it! You don’t need to be the project expert to help move things along. Simple actions like asking good questions, manually testing, or asking for additional review help can make all the difference.

#code #process

Remote Communication

Sustainable remote communication requires being intentional and mindful of your own and others’ time.

In distributed companies, we can’t bump into each other in hallways, have casual coffee chats, or even offer a proactive helping hand if a teammate looks confused. Every conversation, be it in video, text chat, or private message, needs to be initiated by someone with intention.

What we gain in potential focus and flexibility, we lose in in-person spontaneity. This isn’t all good or bad, just different. Being mindful of this while we pick the communication tools at our disposal is key to communicating clearly and efficiently.

If you find that you’re constantly task switching, having endless back-to-back meetings, or spending all of your time in text chat, this post offers a few things to experiment with.

It’s 2020, so put on your own oxygen mask first 

Before we dive in, I want to acknowledge that there is a pandemic raging, climate change making itself ever present, and all the while, we’re still expected to work. Maybe you’re also struggling with additional caregiver responsibilities, or the social isolation of not being able to physically visit people without calculating the risks.

If this remote stuff feels impossible, know that this isn’t a normal time to be working remotely. Long-time remote-first workers are feeling the strain too. Please take care of yourself as best as you can, and if you’re a manager reading this post, please encourage folks to take some time off and model that behavior by taking that time off yourself.

Types of Communication Mediums

Long-Form Writing

Long-form writing is the binding glue for most well-functioning remote companies. If you’re ever unsure of which medium to pick, it’s hard to go wrong by starting with a long-form post.

A post written by one author can start a conversation in the company with many people and teams. The medium is also flexible enough to handle everything from routine meeting notes and team project questions (working through technical, design or product questions) to a well-honed north-star vision that might guide company direction for the following year.

Benefits

  • Long-form writing forces the author to stop and take the time to frame their thoughts in a succinct and coherent way.
  • Participants don’t need to drop what they’re doing, right now, to focus on a problem that someone raises. They can instead reply when it’s the best time for them. A response within a business day or so, for example, is more than appropriate. This allows for more experimentation with personal schedules. Many find it more efficient to devote one part of their day to reading and responding to posts.
  • It allows folks who are more time-shifted to catch up with what’s going on. This can especially be helpful if you have colleagues in very distant timezones or are working flexible hours.
  • We can eliminate some meetings and better prepare for the remaining ones.
  • With search, we can find out why things have happened in the past without us being there or relying on imperfect human memory.

What type of problems would we pick long-form writing for? Really, just about everything: from a CEO describing company vision to an individual contributor getting into the weeds and needing to work out a solution with feedback from co-workers. 

Examples

  • A group post (written by many people) outlining the north star for company direction 5 years out and a specific roadmap for the quarter. The post asks readers for feedback with a given time deadline, e.g. open for the next 2 weeks.
  • A project lead summarizes the why, the challenges, risks and estimated time for a given project.
  • A designer asks for feedback from the team and stakeholders for the latest flow for their new feature. 
  • A developer outlines two implementation options for the feature and asks both product and other developers for feedback on trade-offs, and to confirm that they’ve understood the requirements.
  • A lead writes up routine team meeting notes: highlighting the agenda, who attended, what was asked, what decisions were made, any follow-up action items, and who was responsible.

Tool requirements

Roughly, a specific tool picked to power internal discussions should have the following elements:

  • Posts can notify the right number of people – There should be some ability to tag individuals and larger groups like teams or divisions.
  • Searchable – Lets us find out why things have happened.
  • Editable – Mistakes can be fixed and we can go back to summarize conversations and outcomes at the top of the post, or in a summary comment. For example, if a post asks a question and a direction was picked, we should double-back to edit the post to point out what happened.

Internally at Automattic, we use a tool called P2 to power these discussions. Forum software or wiki tools might also work for you. 

Try this when:

  • You need to collaborate with one or more colleagues.
  • You want a first step to see if a problem can be described and next steps taken asynchronously, without a meeting.
  • No other communication medium is a good fit!

Documentation

Documentation is a specialized form of long-form writing where information remains relevant over time and is often maintained by many people.

Tool requirements

  • Searchable –  Information is easily discoverable by anyone in the company
  • Editable – by many people

Examples:

  • Company Culture Book
  • Onboarding documentation
  • Team Processes
  • On-Call Runbook

Try this when:

  • Knowledge is buried in someone’s head and only one person knows how to do something
  • Information is repeated, whether in video chats, text chats, or time-sensitive posts
  • A one-time post resonates with your co-workers more than expected

Workflow Specific Tools (Issues/PRs/Figma/etc)

As a company you may already be communicating asynchronously with very specific tools for a given workflow! For example, bug and feature requests may be written in issues, while developers may spend a lot of time in pull requests and code reviews (aka asynchronous pair-programming).

Try this when:

  • Tools and workflows are already in place for your team or company

Try switching mediums when:

  • A discussion is going in circles. Some signs of this: needing to click to expand a hidden part of the discussion, or two participants rapidly commenting back and forth. When this happens, folks may be accidentally talking past each other. Text chat or video calls/screen sharing can convey information more quickly between two individuals and may help clear up any misunderstandings.
  • We need to step back to better frame a problem at a higher level. For example, instead of going back and forth on implementation details in code, we might need to ask if it’s worth building said feature at all. To reach the right folks, writing in long-form is usually a good call.

If people do choose to switch mediums, please remember to follow up with an update or comment summarizing what was decided. This helps others catch up without needing to reach out synchronously.

Text Chat

Synchronous communication is a bit of a double-edged sword. On the one hand, it lets you talk with folks right now, which can help greatly when working through problems and building camaraderie on a team. On the other, it means we may leave folks who are in different timezones or working flex hours out of conversations.

For some, text chat can be very distracting and draining. Let’s go over some reasons why this might be so.

  • Are you allowed to mute chat for focus time, or after hours?
  • What is the expected response-time for a ping in a public channel or direct message? Talk with your lead to get a clear understanding! For example, since Automattic has no core working hours, on my team it’s okay to respond within a business day if folks are not online or have muted chat for focus time.
  • If you’re not on-call, how well are you silencing these notifications on personal devices after work?
  • Can private messages be taken to a more public channel instead? Too many private messages can feel like the equivalent of someone stopping by your desk and interrupting you. If the subject isn’t confidential and can be discussed in the open (think projects, not feedback), consider moving the conversation to a more public area. This helps give others visibility into what you are spending your time on. It also avoids ever-growing group direct messages where some subset of folks have only half of the conversation.
  • If someone asks for non-urgent help and you’re in the middle of something, can you schedule a time to chat in text or video later on the calendar?
  • Don’t force others to read text chat backscroll. If an interesting conversation has occurred and we’d like to share what was talked about, a long-form post summarizing it is much more appropriate than having someone else parse a long, winding conversation.

Try this when:

  • You need a bit more bandwidth to quickly discuss an idea with a few people.
  • Team/co-worker bonding! a.k.a all of the emojis and gifs! ✨

Video Chat/Screen Share

Video chats and screen sharing demand one’s immediate attention. I can juggle a few ongoing text chats, but I can manage only one video call at a time. In return for this demand on our attention, we can share a lot of information in a short period of time.

Try this when:

  • We’re building relationships! 1:1s, the weekly team call, watercooler chats and so on!
  • We quickly need to build consensus or brainstorm next tasks in a meeting.

Video calls, for whatever reason, tend to be more draining than meetings done in person. Except for 1:1s, aim to keep meetings small, short, and on point. An hour, for example, tends to be the maximum limit of focus, no matter how much people would like to pay attention. Team process may need to be altered to accommodate this additional cost. I’d recommend simplifying process as much as possible and preferring async over sync when looking at communication options.

Summary

Regardless of what medium we pick, remember to communicate regularly and often! While working remotely, we often need to work twice as hard to make sure folks know what we’re accomplishing and when we need help.

Communicating in a sustainable way requires being intentional and mindful of your own and others’ time. If this ever feels draining, try switching mediums or start a discussion with your lead on experimenting further with process. It may make all the difference!

#process