A little update to the series

Roughly three years ago I started writing a series of posts on the @wordpress/data API. Little did I know how valuable this series would be to so many people within the WordPress community (and at my current employer, Automattic)! Of course, as with all APIs, changes happen, and while the content of the series still applies and works, it does need updating. So over the next couple of months I'm going to take some time to update the series with all the new interfaces and changes to bring it up to date.

Today, however, I completed the first update to the series: replacing a broken part of the demo apps referenced throughout. When I first wrote these posts, I utilized a third-party service to help simulate communicating with a backend for data. But for about a year that service has not been available, which meant a good portion of the demo apps didn't work. Although there are a number of other options, they either required self-hosting, had rate limiting, or had CORS issues. The latter two were problematic because of where the demo app is hosted.

I thought I might have to code my own solution until I came across a suitable replacement. It ended up being a really great option because it's just an API mock service (with products!), which means the chances of it being rate-limited or unavailable in the future are slimmer. Perfect for the use case.
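Since the demo apps only need product data from the mock service, the swap mostly amounts to pointing a small data-access helper at the new endpoint. Here's a minimal sketch of what that helper can look like — the base URL and the `/products` path below are hypothetical placeholders, not the actual service the demo apps use:

```javascript
// Hypothetical base URL for the mock API — substitute the real service.
const BASE_URL = 'https://example-mock-api.test';

// Fetch the product list from the mock service. Accepting the fetch
// implementation as an argument keeps the helper easy to stub in tests
// and makes swapping services later a one-line change.
async function getProducts(fetchImpl = fetch, baseUrl = BASE_URL) {
  const response = await fetchImpl(`${baseUrl}/products`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```

Keeping the fetch implementation injectable means the next time a mock service disappears, only the base URL (and any response-shape differences) need to change.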

Anyway, I updated the app examples to use this new service, along with the relevant posts in the series. Enjoy!

A tribute to my sister

On the evening of April 5th, 2022, my sister made her journey from this life to the next. My heart is heavy and full of grief. Still. Writing this has been cathartic. I’ve started and stopped many times over the past few months. Often weeping. It’s my attempt to honour her memory and ease the pain.

I remember when we were young children we disliked each other and fought a lot. I don’t really remember much about why. She was three years younger than me and I may have been jealous of the attention she got as the youngest, or maybe it was a symptom of all boys hating girls at that age. I think Jenny secretly delighted in getting her older brother into trouble and I can distinctly remember a few times when she cleverly included herself in escapades with my friends with the threat she’d arrange for some story that would get me grounded if I didn’t let her tag along.

Really though, those early years were probably typical brother-and-sister rivalry, and they pale in comparison with how close we became as we got a bit older.

I ended up being fairly protective of my younger sister. I remember when she started school and I gave her instructions to let me know if any kid bugged her and I’d take care of it. Truth be told, that sounds tougher than it actually was (me being one of the shortest kids in school) – in fact, Jenny held her own on the school ground and didn’t have any trouble at all making friends.

In our teen years, Jenny was my confidante. She was absolutely the best listener and whenever I had school/girl/friend troubles – Jenny would patiently tilt her head my way. Sometimes she wouldn’t necessarily say anything, she was just there.

Over the years we were fortunate to share a lot of the same interests. We both loved to read, we both grew up loving the outdoors, we both had a creative streak expressed through music and writing, and acting – hello elementary school Wizard of Oz, she played Dorothy and I was the munchkin king lol. Both Jenny and I took piano lessons, participated in music festivals and sang in choirs. Although, she definitely had a way better voice than me!

We both shared a strong faith. While we had formative years growing up in our parents' church, it wasn't until our teen years, when we joined a youth group, that we really grew deeper in our faith and relationship with Jesus. We developed a pretty incredible group of friends during those years who had a deep impact on our lives. It was also during this time that we started playing music together, I on the piano, she singing.

We both loved sports. In fact, some of my most distinct memories are the times we’d be at the arena together – Jenny for figure skating lessons, and me for power skating or hockey practices. I’d go to her figure skating competitions, or ringette games, while she’d come to my hockey games. Then there was soccer in the summer.

I’ll never forget the call I got from my Mom and Dad one night from the Owen Sound hospital, telling me to come meet them there. I had been out with friends, and Jenny was at one of her figure skating lessons. A coach had noticed that one of Jen’s legs was really swollen and suggested they go to the hospital afterwards to check it out. A scan at the hospital revealed a mass in her leg, and when I arrived my parents explained that Jenny was being taken to Toronto’s SickKids hospital for more tests.

It was there, at 15 years of age, that she was diagnosed with a rare (for her age) desmoid tumour. It was an aggressive variation and, although non-cancerous, needed to be treated and operated on because she was at risk of losing her leg and of other organs being impacted. Thus began Jenny’s first encounter with tumours, this one lasting through two years of equally aggressive treatment including surgery and chemotherapy. The chemo almost killed her, but she was strong, survived, and was eventually declared tumour free.

Though her time in the hospital and treatment for the tumour negatively impacted her life, Jenny finished high school with great marks and attended the University of Guelph, receiving her Bachelor of Applied Science (with an honours major in Child Studies). Shortly after beginning university, the desmoid tumour returned. The angst and grief of this diagnosis was all too real, especially as it came just after a checkup meant to confirm things were okay. Fortunately, for the next round of treatment there was an experimental type of chemo she was able to take that had less aggressive side effects and worked incredibly well at killing that tumour. After about a year, she was once again in remission.

After graduating, she started her 23-year career with the Children’s Aid Society of Owen Sound, and in her early years there she continued her education, receiving her Bachelor’s and then Master’s in Social Work. While one might think this was mere academics, Jenny pursued it because it mattered to her that she was well grounded in what she did.

Jenny deeply cared about the work she was doing at the Children’s Aid. It was more than a job or a career; she felt she had a calling not only to protect and help the kids who came into their orbit but also to work, wherever and as much as possible, to help the impacted families as well. It was hard, emotionally exhausting, and at times frustrating. One interesting thing that surfaced in the last year, as Jenny battled her final bout with cancer, was that not only did she have an indelible impact on those families and kids, but she was also a confidante, friend, mentor, and support to an incredible number of her peers and colleagues.

That was Jenny. Genuinely and sincerely working to better the world around her and make everyone she interacted with feel valued and significant.

It was pretty cool that Jenny ended up marrying my best friend Chris. We even all bought and lived together in a house in Hanover when I was pastoring there. It was an incredible opportunity to pool our incomes together to break into owning a home and even more awesome for my first two kids to have “deedee” (Auntie) and “dohdoh” (Uncle) (as my daughter Karissa referred to them) around.

Due to the scars from the surgeries and treatments Jenny had with her tumours, she was told that it was very unlikely (and dangerous) for her to have children. Needless to say, Jenny defied that diagnosis with her and Chris bringing three incredible kids into the world – Aidan, Connor and Anna. These little miracles were the world to Jenny and Chris and she loved them deeply.

I hate using the past tense talking about my sister. Hate it. The biggest challenge of grief is when your present is now your past. When the person you love is both near and far. Familiarly near in memory, and yet so painfully far in presence.

In the intervening years, so much happened – as it usually does. Our families grew, I began a new career, and we each moved around to different houses. Though our time together decreased, we made the most of the time we had. So many memories were created in sharing birthday celebrations among our families, back and forth for various holidays during the years, and of course – our annual family trips to the cottage.

Going through the pictures now, remembering the stories attached to each one. Wishing we had one more big trip together.

Even though there were times we thought Jenny had seen the last of her tumours, variations of cancer kept surfacing over the years. She was especially susceptible to melanoma and had several bouts where cancerous growths had to be removed. Despite the vague awareness that Jenny was at high risk of cancer returning, it was so easy to think it would never be as serious as when she was a kid.

It would never get that bad.

Jenny herself lived life fully. She didn’t run away from challenges, but faced them head-on, strong in her faith, and often a source of strength and joy for those around her. That’s not to say she never wept or got frustrated or angry with those challenges. She let those moments come as they inevitably do. Incredibly though, she had this ability to pass through those moments stronger. Probably because she learned to embrace her reality, release her anxiety, and enjoy what she could.

Her joy was in seeing others’ joy.

While cancer is what eventually stole my sister’s body from this world, it didn’t take her life. Jenny’s legacy is the impact of her life on all of us.

Her faith fueled her joy, strength, and courage in everything that she did and lifted up everyone she encountered. A promise that this world is just a temporary stop-over, the best is yet to come.

Her perseverance, and investment in the things that mattered. Not worrying about the things that didn’t.

Her preparation lifted the burden of so many people who weren’t always prepared or even knew what to prepare for. Jenny often blazed a trail that lifted the capability and possibility of everyone around her.

Her love and attention meant you walked out of your moments with her feeling better.

I miss you, Jen.

Transferred to

With the successful transfer of ownership of Organize Series, and less time to manage my Digital Ocean droplet for my blog, I decided to transfer hosting of this blog as well. Besides removing the need for me to maintain a server, this also lets me experience the best of the work my colleagues do and all the latest that happens here.

As a bonus, the migration from self-hosted (via Jetpack) went really smoothly (I completed the entire transfer in about an hour).

The only downside is that it looks like I lost all the likes on old posts, along with the followers from my previous Jetpack connection. I didn’t have a lot of followers, though, so it’s not a big deal – if the content ends up being useful enough, that might pick back up.

One Year Later, a Reflection on becoming an Automattician

For those who follow me on Twitter, you may recall a post I tweeted just over a year ago:

My start date with Automattic was July 23, 2019.

I fully intended to write a followup to that tweet, but then I got sucked into all kinds of fun stuff with my new responsibilities and posted in my todo notes: “Write a three-month followup”. Three months came and went, and the delayed post got pushed to a “six-month followup”. Of course, six months came and went too, and I finally adjusted it to a “year anniversary followup”. Here I am, three days after the one-year anniversary of my start at Automattic, and I think it’s time I write this post!

Some background

I’ve always had an interest in programming and working with computers. I remember Grade 6 was the first time I experienced what it was like to write code, when I was one of a few students lucky enough to get to play with a Commodore PET that my school had purchased.

Over the years I would spend as much time as I could on computers, but often my only exposure was what was available at school or at friends’ houses, as my family couldn’t afford a computer. Then in Grade 10 my Dad picked up an old Apple IIe from his workplace and brought it home, as his office had switched to new IBMs and was getting rid of all its old, obsolete machines. That thing was like gold to me! I remember reverse engineering Applesoft BASIC programs to figure out how they were written and then trying to build my own programs from what I learned. So much fun!

I ended up going to school to study to be a pastor, but even there I was always playing around with computers. By then I had scraped up some money from different jobs to buy myself a 386 (with a monochrome monitor and a 256MB hard drive!). From there I graduated to a 486 and a color monitor, and BBSes were all the rage. Who remembers the screeching of a modem connecting over dialup?

It wasn’t long before I started learning HTML and playing around with the fun things you could build on Geocities. When I became a pastor, I was still fiddling around with building websites and things online. From Geocities I graduated to using PHP and MySQL to build things in PHPNuke! Oh, that was so much fun! I built a module for PHPNuke to make it easier to post the sermon series I preached online and even put some forums up to interact with folks. And then I stumbled onto WordPress.

WordPress 1.5 was the first version I used, and it was so much easier to build things and tinker with than PHPNuke. At that point I started converting the series module I had built in PHPNuke over to a WordPress plugin, and Organize Series was published to the repository at the beginning of 2007.

I distinctly remember reading about Automattic, the team behind WordPress.com, around this time and thinking to myself, “Wouldn’t it be fun to code full time and work for a company like Automattic?” At the time, though, I was still enjoying my day job and content learning to code and building things on the side during my free time.

Fast forward a few years to February 2012. I had started a side business freelancing for a wide variety of clients – building custom plugins and website solutions – and it had grown enough that I had to choose: continue it and leave my day job of pastoring, or significantly scale back the growing side business and focus on being a pastor. I couldn’t realistically keep doing both; pastoring was a really tough job and I was burning the candle at both ends. I realized that while I didn’t regret any of my years pastoring, I loved and was passionate about building things with code, and I decided to take the plunge and go full-time coding (the spiritual side of this decision is a whole other story!).

A short while later I was very fortunate to land an open-ended contract with Event Espresso, and it turned into 7 great years of them being my main client and providing the majority of my work and income. I’m so grateful for the many opportunities Event Espresso gave me to learn and grow as a software engineer.

I learned so much about selling plugins online and Saas from Garth and Seth. I grew in my skills thanks to the awesome things I learned from my fellow developers at EE, Brent and Mike. I learned how to have patience and support customers via the excellent support Tony and Josh gave to users of EE.

I’m also grateful for the freedom I gained from working with EE to contribute to the WordPress project. EE had something called “ICE” (Innovate, Contribute to WordPress community, or Educate by learning new things) time which I utilized to be involved in WordPress core and then Gutenberg. It was a great supplement to the time I had already been giving WordPress personally over the years.

The turmoil…

One of the awesome things about freelancing and working on your own is the freedom of not having a boss. However, one of the not-so-awesome things is all the overhead that goes with it, including planning for vacations, dealing with things like healthcare, and saving for retirement (not to mention the ever-present need to keep the job pipeline full!). Fortunately, while I was on contract with Event Espresso, they were extremely generous with hours, and we agreed to arrangements that let me creatively bank extra hours worked in advance so I could go on vacation without needing to work.

Still, after 7 years of this, I realized that despite how great it was working for EE, the things I was learning, and the great fun I had with the team, a few things were creating a bit of turmoil for me.

  • I was getting tired of billing by the hour and having to track my hours worked. I found myself measuring all my time by my hourly rate and it sucked.
  • Even though EE was great about taking time off and working around holidays, etc., it still meant lost paid hours if I had to take extra time away for some reason (unexpected sickness, family issues, etc.).
  • EE was a distributed team and everyone worked remotely. That definitely has its advantages, but I had worked 7 years with the company and had met only two of the team in person in that whole time. I’d advocated for some sort of team meetup over the years, but it never materialized (I understand there are costs involved). While not a major contributing factor to the turmoil, it was something on my mind (and that I often thought about).

Outside of EE I was also really enjoying working on the Gutenberg project and the new stuff I was learning as a result of working on it.

One morning…

One day in early June last year, I woke up and as I started my day I found myself browsing the Automattic website and remembering how over 10 years ago I had wondered what it would be like to work for this company.

Of course, in the meantime I had gotten to know three great people working on the Gutenberg project who also happened to work for Automattic. I decided to casually ask Riad, Andrew, and Greg what it was like working for Automattic and what they enjoyed about being employed there. All of them had really interesting answers, and they encouraged me to apply if I was interested (even offering to be references so I could be fast-tracked through the application process). I chatted with my wife; she had noticed the change in my demeanour over the past few months, and she encouraged me to go for it!

The application and hiring process

So on June 7th I fired off an email to Automattic with my application and resume (cc’ing Greg, Andrew, and Riad as references). Even with fast-tracking, I wasn’t expecting to hear back from anyone until sometime the following week. However, about 20 minutes after submitting my application, I received an email asking if I was available for a Slack chat that day! I was over the moon (and also super nervous).

The initial chat.

Automattic’s hiring process is done entirely over Slack, and for the duration of the whole process I was added to a private Slack channel (#hi-darren-ethier) where all the interviews and followup discussions happened. Overall, I really liked this process; it made things a bit less formal and was not as nerve-wracking as talking in person or over video would have been for me.

My initial chat was with someone from the hiring team and I vaguely remember it being mostly sharing a bit of my background and answering some questions about myself.

There was also some information shared about Automattic. In particular, something that was referenced at all stages of the interview process at some point or another was the Automattic Creed. As a sidenote, I really like that the company has taken the time to articulate what is important to them. One year in and I can definitively say this creed is not just a theoretical exercise. By and large, it’s embraced and lived in the company.

At the end of the chat they let me know that I was proceeding to the next stage which would be another chat with an engineer.

That day, I received a scheduling invite to pick a time to meet with the engineer assigned to me for the interview and there was a slot open that Sunday evening! Of course I grabbed it 🤗

Chat with engineer

Unfortunately, the Sunday slot with that engineer got cancelled because they weren’t able to make that time, so my interview was rescheduled for the following week (June 18).

The conversation with the engineer assigned to my interview was really cool. There were some questions about work I had done and various problems I encountered over my career and how I solved them. It didn’t get too technical but was varied. I also had the opportunity to ask any questions I had about Automattic and I can’t remember all that I asked offhand but I do remember asking how the person interviewing me got started at A8C and what they worked on.

Shortly after the interview I was notified that I was going on to the next stage of the process which was the code exercise.

The code exercise

I’m not going to go into too much detail here about the exercise given as a part of the interview process, however here’s some general information.

  • I was given a week to do it.
  • I was told not to spend any more than 4-6 hours on it (I think I initially spent about 4 hours).
  • I was given access to a private repo that contained instructions.
  • It involved a custom WordPress plugin.
  • It involved a few logic questions related to the code, as well as finding a given number of known security issues.

One thing I really appreciated about this exercise is it wasn’t theoretical. The questions and tasks were based on real-world scenarios and not “tricks” or designed to fool you. Realistically, if you are a good programmer, the exercise should be fairly straightforward.

For my first attempt, I naively thought I had done pretty well; however, I did have difficulty finding all the security issues. I submitted my work after two days. At this point I was starting to get really excited about the potential of working at Automattic and wanted to get the process over with, one way or another. I’m the kind of person who, once I make a decision, just wants to move forward, so going through a job application process has always been difficult for me.

So you can imagine how I was feeling when I didn’t hear back from the engineer assigned to review my code exercise for 5 days! To top it off, the initial review pointed out a silly mistake I had made in one of the logic improvements, and I was told I had missed a few security things.

I took a slower, more measured, and closer look and discovered that the security issues I had missed were pretty basic, and I felt pretty silly for having missed them. At this point I was feeling pretty bummed and thought I had blown it. Still, I corrected my mistakes and re-submitted for review.

Two days later I got a response in the Slack channel with a detailed review, split into an overview of the “good parts” contrasted against what “needed improvement”. I really appreciated this because it was insightful, valid, and specific critique that showed the team took the exercise seriously.

Of course, the words that stood out the most to me at first were:

Given the above, we’d like to move you forward 🎉! The next step of the process is a trial project together. We’ve found it’s the best way for both sides to make sure it’s a great fit.

Oh ya baby, on to the next step!

The trial project

Going into the hiring process, I scoured the internet for anything I could find on what to expect. From the Automattic website and various posts from other folks who had gone through the process, I expected that if I made it to the trial stage, I’d be working with a team on an actual project. However, it turned out to be a bit different: fairly recently, the hiring team had decided to streamline things a bit to improve hiring velocity. Here’s what I experienced:

  • I was given access to a repository with a project that I would work on myself. It was close to a real world project with a defined scope of what to accomplish during the trial.
  • The project was paid ($25/hr USD). One thing that is really interesting here is that the rate is the same for everyone doing a trial at Automattic. It doesn’t matter what position is being applied for. It also has no relation to what the eventual salary would be.
  • It’s part time and I was able to decide when I started on it. I also had the flexibility of determining how much time I gave to it each week.
  • The average total time people spend on the trial is around 40 hours.

The trial itself was fairly straightforward and was a good opportunity for me to demonstrate my JavaScript skills. At the start of the trial, I was added to a few more Slack channels that opened up the ability to seek help from and engage in conversations with other Automatticians.

Communication during the trial (besides via Slack) was done through work in the GitHub repository and also in a P2 created specifically for the purpose of the trial.

I started work on the trial as soon as possible (July 1st) and in all spent a total of 13 hours on it over the course of 3 days, completing all the tasks. I found out on my birthday (July 4th) that I had progressed to the next stage and would be recommended to Matt (Automattic’s CEO) for hire. WooHoo!

The final chat

At that point I figured (based on the research I had done) that I would have the “Matt chat,” which would be the final decision on whether I’d be hired. But this was another thing that had changed: the final chat is now done by someone from HR. To be honest, I was a little disappointed because I had been looking forward to the opportunity to chat with Matt, but given how much the company had grown, I understood the change.

The final chat with HR happened one day later (July 5th) and essentially revolved around admin tasks related to getting hired. The decision was effectively a done deal; what was left was a discussion around compensation.

Compensation was something I was kind of nervous about. I had been doing pretty well financially before entering the hiring process and was concerned the offer would come back significantly lower than I expected, leaving me to decide whether the other perks of the job would make it worth it. Some primary motivators for applying to Automattic were:

  • my experience working with other Automatticians on Gutenberg.
  • being paid a salary instead of hourly.
  • unlimited AFK policy (vacation) – no more having to be concerned about how I’ll cover the expense of not only vacation time but also the hours I’m not working.
  • Benefits – while Canada has a great public health plan, there’s still a lot that is not covered. A8C’s benefits are really good.
  • Matching retirement investment plans.
  • The opportunity to travel and see people I work with in person. This was a huge one for me! Not only do I like travelling to different places, but having the opportunity to meet my peers in person and collaborate on things during time together was a huge plus.

So, given the above, I was actually prepared for a small drop from my current income but was quite pleased when they came back with a compensation offer that was more than I was expecting.

On July 8th the offer came in and I accepted. I requested a start date of July 23rd which allowed me to give a couple weeks notice to Event Espresso and wind down work there. I was so excited!

And so it begins

So what has the last year been like working at Automattic?

In short, amazing.

At Automattic, communication really is oxygen, and I’ve had fun navigating the “chaos” of the firehose of internal communication channels. Fortunately, one of my strengths is devising systems and processes for consuming information, so I’ve developed a process that works well for filtering what’s relevant both to my work and to my interests.

I was placed on a team that works on WooCommerce Blocks and I’ve also been serving as sort of a bridge between the WooCommerce division and the Gutenberg project given my involvement there. In the course of the past year, here’s a sampling of what I’ve done:

  • I worked on and contributed to the release of some pretty cool new WooCommerce blocks that live on the edges of what’s possible with Gutenberg (the All Products block and the new Cart and Checkout blocks).
  • I’ve helped solve some gnarly challenges around supporting multiple WordPress versions.
  • I’ve given learn-ups and helped level up peers in using the @wordpress/data API. This also resulted in a series I wrote here on Unfolding Neurons.
  • I was given the responsibility of leading a squad within my team, and just this month that squad became its own team (with me as the lead).
  • I went to Orlando to attend the Grand Meetup (every year the entire company gets together in one location for 10 days) and also went to Kelowna for a team meetup.
  • I write a weekly summary of JavaScript and Gutenberg news for the entire company.

I’ve also continued to level up my skills as I learn from some of the brightest folks I’ve had a chance to work with.

What is great about Automattic

Here’s just a few things that have really stood out for me as great things at Automattic.

  • There’s a real sense that folks really care about you.
  • Automattic’s response to the pandemic and the resulting turmoil has been stellar. Everyone has regularly been encouraged to take time off whenever needed, and so much thought and care has gone into communication and adjustments around the pandemic’s real impacts on everyone’s day-to-day.
  • Also related to the pandemic, Automattic has been quick to address the safety and well-being of all employees. Very early on, all travel was cancelled and disallowed, and the Grand Meetup scheduled for September was cancelled. This policy continues into 2021. Of course, it’s a bit of a bummer, given that one of the things I was excited about in coming to work for A8C was the travel, but I so appreciate a company being proactive given the danger the pandemic poses.
  • Working with folks from all over the world is something I had some experience with before Automattic, but multiply that by 100: I love the diversity within the company, not only geographic but in so many other ways.
  • The trust given to employees is amazing. From day one, I was able to access any P2 in the company, including P2s discussing pending acquisitions (like Tumblr!). At first, I thought I had stumbled onto some P2s by mistake and shouldn’t have access to that information, but I was reassured that wasn’t the case. Collaboration and feedback are welcome from anyone in the company, across different teams. That’s really cool.

I also want to highlight an interesting observation, relevant to my involvement in the WordPress project and community prior to Automattic. There is a stigma that exists to a degree in the community that Automattic enjoys special privilege and has hidden agendas for the community. While it’s certainly true that a number of Automatticians work full time on the project as a result of A8C’s contribution towards “Five for the Future”, I am truly amazed at the care and concern folks within the “dotOrg” division of A8C give to avoiding favoritism and “shortcuts” for competing interests within the company when getting things done in WordPress or Gutenberg. There are constant reminders and prompts internally to keep discussions “in the open” and to work with the community on projects.

Certainly Automattic has an outsized influence on the direction of WordPress given the contributions it’s made, but from what I’ve seen, that influence has pure motivations and is not exclusive.

It’s not always rosy

Of course, as it is with any human organization, things at Automattic aren’t by any means utopian or perfect. There are certainly some things I’ve encountered in my time here that are unfortunate and, in some cases, still struggles:

  • Career progression is very self-guided. While there are opportunities for growing and trying new things, the initiative is very much on the individual to take those opportunities. It’s hard to have a sense of where one fits in as far as “levels” (if you’re into that sort of thing), and some folks might find that a bit difficult (personally, I’ve always been a self-directed individual anyway, so this isn’t really a big deal to me). It’s worth noting, though, that team switches are possible, which provides the opportunity to move to another division in the company where you can learn new things and grow your skills in other areas.
  • Siloing happens. While on the whole cross-team collaboration happens and is encouraged, the reality is that as the company has grown, at least some of its systems and processes (especially with acquisitions) have failed to keep pace with the impact this brings on the overall company. To a degree, I think this is inevitable. But I do see evidence of the impact this has had on culture and product development. The really great news (from my observations at least) is that folks are aware of this and are actively working on addressing it. It’s really interesting to watch this unfold (and participate in!).

There are other things as well that I don’t feel at liberty to share in a public forum but the thing I’m impressed the most about is that for the most part, when problems surface they are addressed in one form or another. I’m impressed with the sensitivity and thought that has gone into addressing those problems.

Well, this has turned out to be a bit more than I thought I’d write, so I think I’m going to stop here.

By the way, we’re hiring!

WordPress Data: Registry

This entry is part 13 of 15 in the series, A Practical Overview of the @wordpress/data API

At various places so far in this series, you would have seen me use phrases like:

  • “registered store within the current registry”
  • “current registry dispatch” or “current registry select”

I’ve also mentioned that I’d get to this mysterious “registry” reference later in the series. Well, here we are!

What problem does the Registry solve?

As part of diving into what the registry is in the context of @wordpress/data, it’s good to take a brief look at the problem it solves. To do that, we’re going to look at one of the cool features of the new Block Editor: reusable blocks.

With reusable blocks, you are able to select a group of blocks in the editor and convert them to a reusable block to be saved for reuse throughout your site in different posts/pages etc. It’s basically a saved instance of a specific set of blocks and content. This feature is neat because:

  • Common patterns for content can be saved for re-use on a site saving time in recreating that content everywhere.
  • You can edit a saved reusable block and any other place it is in use will automatically update with the new content.

Reusable blocks are actually powered by saving the content to a special post type created for them:

  • The contents of a reusable block are saved to a post of a special post type.
  • The reusable-block reference is saved to the content of the post it is used in.

Let’s look under the hood here. Let’s say I’ve created something like this:

I’ve saved this as a reusable block named “Test Reusable Block” and note that it’s composed of three blocks: a paragraph block with the text “This is a reusable block test”; a quote block with the text “With a quote”; and a button block with the text “And a Button”. When I save the block and the post, what has happened under the hood?

First, the reusable block itself is saved as a post object for a custom post type called wp_block. The content of the block itself is saved to the post with this serialized block structure:

<!-- wp:paragraph -->
<p>This is a reusable block test</p>
<!-- /wp:paragraph -->

<!-- wp:quote -->
<blockquote class="wp-block-quote"><p>With a quote</p></blockquote>
<!-- /wp:quote -->

<!-- wp:button -->
<div class="wp-block-button"><a class="wp-block-button__link">And a Button</a></div>
<!-- /wp:button -->

Notice the familiar block comment delimiters used for representing blocks in post content.

Now, let’s take a look at how the reusable block itself is saved to the post content for the post implementing it:

<!-- wp:block {"ref":825} /-->

Here we have wp:block, which indicates this is a reusable block, and the serialized attributes {"ref":825}, where 825 is the ID of the post the content for the reusable block is saved to.
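To make that delimiter concrete, here’s a small sketch of pulling the ref out of the serialized comment. (parseBlockRef is a hypothetical helper I’ve made up for illustration; it’s not part of any WordPress package.)

```javascript
// Sketch: extract the post ID a reusable block points at from its
// serialized comment delimiter.
function parseBlockRef( serialized ) {
	const match = serialized.match( /<!--\s*wp:block\s+({.*?})\s*\/-->/ );
	return match ? JSON.parse( match[ 1 ] ).ref : null;
}

parseBlockRef( '<!-- wp:block {"ref":825} /-->' ); // → 825
```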

So essentially a reusable block is content embedded from one post into the current post. This is important for the context of what I’m writing about here, because one problem this introduced early on in the usage of reusable blocks was: how do you edit this embedded content?

Originally, the reusable block component would fetch the content for the particular reusable block from its post and then inject those blocks into the editor’s block list state (which itself was a data store in @wordpress/data). However, this introduced some challenges:

  • Maintaining a cyclic dependency between blocks in the post and shared blocks from reusable blocks (this affects everything from undo/redo tracking to the extra complexity of keeping embedded blocks from reusable blocks from being edited separately outside of a reusable block edit context, etc.).
  • Fetching shared blocks from the server triggers the parsing of those blocks, which means that if they had already been rendered, the blocks will unmount and remount, introducing a lot of overhead and performance impact.

So about a year ago, one of the project core contributors, Riad, introduced the idea of reimplementing reusable blocks as an embedded editor instance. Essentially, the editing, saving, and rendering of a reusable block would end up self-contained in its own editor instance, and the main editor would only be concerned with the reference to the reusable block.

The path from the idea to the implementation took some work, but it paid off and ended up being a really great working solution.

Now back to the purpose of this particular post: how do reusable blocks relate to the registry? Well, one of the main things introduced as a result of this work was the concept of parent and child registries, which allows for nesting editors using @wordpress/data. In the context of reusable blocks, the editor instance for each reusable block can utilize the main editor’s global stores for things like viewport and various settings, but override the other data stores with block state specific to that instance.

So essentially you may have multiple registry instances in play within any given main editor instance.

So… what is a registry (or registry instance) then?

In the context of @wordpress/data, a registry is an object where all registered namespaced stores and their states are kept. @wordpress/data itself creates a registry object, and its top-level API is exposed for that registry by default (so it is known as the default registry). Essentially this means whenever you do this:

import { registerStore } from '@wordpress/data'

registerStore( /** store configuration object **/ );

…you are registering your store with the default registry. It also means that any of the other exposed api from @wordpress/data is connected to the default registry:

// all of these functions come from the default registry
import { select, dispatch, subscribe } from '@wordpress/data';

In most cases and most apps, this isn’t an issue, as you’ll only ever work with the default registry. However, if you are doing cross-store selects or dispatches, this can become problematic if your custom store is used in the WordPress editor environment where reusable blocks might be in play, because reusable blocks are not using the default registry. Instead, they are using a child registry created and used within the specific editor instance serving that block and its contents.

Hence, a whole new class of functions was created to solve the problem of keeping entry points to a namespaced store’s state working within the correct registry. Before we get to those though, let’s start with the function used to create a registry!

createRegistry( storeConfigs = {}, parent = null )

This function creates a new store registry instance, and it can receive an optional object of initial store configurations. Notice that it can also receive a parent registry as an optional second argument! When the parent registry is provided, it serves as the fallback for any API calls to a namespaced store that is not found in the child registry. So if the store referenced by a call like select( 'my/store' ) doesn’t exist in the child registry, the parent registry will be checked to see if it exists there.

You can see this function used in two places (in application code) in the current block editor. The first is when creating the default registry. The second is in the withRegistryProvider HOC; essentially this HOC will (by default) return a child registry, created with the current registry in the app context as its parent, for nested components within that provider. The function is also used extensively in related tests, as it provides a quick way to get a mocked registry instance for testing.

withRegistryProvider is where we get introduced to the RegistryProvider. The RegistryProvider uses React context to expose a specific registry to the children within that provider. The current registry in a tree can be read using the useRegistry hook (or RegistryConsumer if using the older context syntax), which is useful if you want to grab the current registry within a component. In most cases you will never need to worry about these APIs in code you build, but it is useful to know that the block editor uses many of these for managing complex state in the app (withSelect, withDispatch, useSelect, and useDispatch all implement useRegistry under the hood to make sure they are working with the correct registry in the scope of their implementation).

Next, let’s move on to the class of functions that ensure you’re working with the registry in the current context/scope of the function being called:

createRegistryControl( registryControl )

This is used for controls that will do cross-store selects or dispatches and will be provided the registry in the current scope when the control is invoked. You can see this function in use in the @wordpress/data-controls package, where the dispatch and select controls implement it to ensure the selects and dispatches through the controls happen in the right registry.

This function receives a curried function. That is, you provide it a function that receives a registry as an argument, and that function returns another function that receives the control action object and returns the result of that control. For instance, I might create a control that retrieves an item from a specific store (this is contrived; in most cases you’d just use the select control from @wordpress/data-controls for this use case):

import { createRegistryControl } from '@wordpress/data';

// my control action
export const getItemById = ( id ) => ( {
    type: 'GET_DATA',
    id,
} );

// the control
export const controls = {
    GET_DATA: createRegistryControl(
        ( registry ) => ( { id } ) => {
            return 'my store with items' ).getItem( id );
        }
    ),
};


createRegistrySelector( registrySelector )

Similar to createRegistryControl, this function is used for selectors that do selects from other stores within the same registry as the context of the current selector being invoked. It’s basically a way to expose selectors from other stores in a given store so you can keep your client implementation simpler. You can see examples of it in use here.

While it too receives a curried function as the argument, it differs from createRegistryControl in that the first function receives the select interface from a registry (not the entire registry). The function you return takes whatever your selector arguments would have normally been.

The example I gave above doesn’t really need to be a control (assuming the selector doesn’t have an attached resolver); I could have done it directly in my store’s selectors if I wanted. But let’s enhance this example by getting a specific property value from the returned item (assuming the item is an object):

// in my store selectors.js file

import { createRegistrySelector } from '@wordpress/data';

export const getPropertyFromItemInOtherStore = createRegistrySelector(
    ( select ) => ( state, id, property ) => {
        const item = select( 'my store with items' ).getItem( id );
        return item[ property ];
    }
);

Finishing up

The registry can be a hard concept to wrap one’s head around but I hope I’ve helped pull back the curtain on the mystery surrounding it a bit in this post. As always, the comments are open if you have any questions or feedback (anything really useful I will use to update the post!).

You now have enough information to complete converting the cart data in our example application over to @wordpress/data (including cross-store selections)! Give it a go yourself as an exercise, or if you want to skip ahead to a completed example to play with, you can go here.

Thanks for sticking with me through this series on @wordpress/data. There are still two more bonus posts to come, but everything you’ve read so far should jump-start your usage of @wordpress/data in your projects and get you on the road to being an expert user!

WordPress Data Store Properties: Action Creator Generators

This entry is part 9 of 15 in the series, A Practical Overview of the @wordpress/data API

Now that we have a resolver for asynchronously getting products via a REST API endpoint, some action creators, and a selector, it’s almost time for the reducer. But before we get to it, let’s consider another problem we haven’t addressed yet.

When we update or create a product using our UI, how are we going to persist that to the server? We could just wire up the POST/PUT endpoints directly in our component, but remember, one of the reasons we’re implementing @wordpress/data is to provide a common interface for any component needing to interact with the products data. So what can we do here?

Fortunately, we’ve already covered a part of the API that provides us with a mechanism for doing this: controls. With controls, it is possible to make action creator generators that yield control action objects for handling asynchronous behaviour. Essentially, everything you’ve learned about resolver generators so far can also be applied to action creators!

So let’s go back to the action creators we’ve already prepared (reminder: src/data/products/actions.js) which are createProduct, updateProduct and deleteProduct. Right now these are not wired up to any API route. Let’s change that. If you feel adventurous, why don’t you give it a go in your own fork of the app right now and see what you come up with. You can come back and see how close you got after.

You back? Okay, here’s one approach you could end up with:

import { getResourcePath } from './utils';
import { fetch } from '../controls';
import TYPES from './action-types';


const { CREATE, UPDATE, DELETE } = TYPES;

export function* createProduct(product) {
  const result = yield fetch(getResourcePath() + "/add", {
    method: "POST",
    body: product
  });
  if (result) {
    return {
      type: CREATE,
      product: result
    };
  }
}

export function* updateProduct(product) {
  const result = yield fetch(getResourcePath(, {
    method: "PUT",
    body: product
  });
  if (result) {
    return {
      type: UPDATE,
      product: result
    };
  }
}

export function* deleteProduct(productId) {
  const result = yield fetch(getResourcePath(productId), {
    method: "DELETE"
  });
  if (result) {
    return {
      type: DELETE,
      productId
    };
  }
}

Take note of a few things here:

  • I’m using the fetch control (remember this from the post about resolvers?)
  • I’m returning the original action object that was here before the updates.

Notice that the action creator is still returning an action object, the only change here is that with controls, we’re able to do some side-effects before returning that action object.

In the next post of this series, it’s time to go on to the last property of our store registration configuration object, the reducer.

What is WordPress Data?

This entry is part 2 of 15 in the series, A Practical Overview of the @wordpress/data API

The WordPress data module (or package) is a critical component of the new WordPress Editor, as demonstrated by its implementation in (at the time of writing) 7 other WordPress packages:

  • @wordpress/block-directory registers a store with the core/block-directory namespace.
  • @wordpress/block-editor registers a store with the core/block-editor namespace.
  • @wordpress/blocks registers a store with the core/blocks namespace.
  • @wordpress/core-data registers a store with the core namespace.
  • @wordpress/edit-post registers a store with the core/edit-post namespace.
  • @wordpress/editor registers a store with the core/editor namespace.
  • @wordpress/notices registers a store with the core/notices namespace.

What do I mean by “namespace” in the above list?
We’ll get into this in more detail later on, but as a quick answer: stores can be registered to their own state object, identified by a given string, which is referred to here as their namespace. So to interact with the data stored in a given state, you need to know and reference its namespace.
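Conceptually (and this is just a sketch, not the real internals), you can picture the registry as a map from namespace strings to stores:

```javascript
// Sketch: namespaces are just string keys identifying each store's
// state. The store shapes below are illustrative, not the real ones.
const registry = new Map();

registry.set( 'core/notices', { state: { notices: [] } } );
registry.set( 'core/editor', { state: { postTitle: 'Hello' } } );

// To read a store's data you must know its namespace:
console.log( registry.get( 'core/editor' ).state.postTitle ); // 'Hello'
```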


WordPress Data Store Properties: Reducer

This entry is part 10 of 15 in the series, A Practical Overview of the @wordpress/data API

In our exploration of @wordpress/data, we’ve already created most of the properties needed to register the custom products store for our example app: action creators, selectors, controls, and resolvers.

Now it’s time to get to the last property we need for registering our store, and that’s the reducer. The reducer is arguably the most important part of the store registry and in fact it’s the only option you are required to include when registering a store.

The reducer function describes the shape of your state and how it changes in response to actions dispatched to your store. It will receive the previous state and an action object as arguments and should return an updated state value (or the same state object if nothing has changed). Here’s a very basic example reducer:

export const reducer = ( state, action ) => {
    if ( action.type === 'DO_SOMETHING' ) {
        return {
            ...state,
            // ...updated state values here
        };
    }
    return state;
};

There are a couple of principles you should follow with your reducer function:

  • It must be a pure function. Pure functions have no side effects and always return the same output for the same input.
  • It should never mutate the incoming state, but return a new state with any updates applied. That said, you can split up a large state tree so that each branch is managed by its own reducer and then combine them all into one reducer function (using combineReducers).
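As a sketch of what that combining looks like, here’s a simplified stand-in for the combineReducers exported by @wordpress/data, with illustrative reducer names:

```javascript
// Minimal combineReducers-style helper: each key of state is managed
// by its own reducer, and the results are merged into one state object.
const combineReducers = ( reducers ) => ( state = {}, action ) => {
	const next = {};
	for ( const key of Object.keys( reducers ) ) {
		next[ key ] = reducers[ key ]( state[ key ], action );
	}
	return next;
};

// Two illustrative branch reducers.
const products = ( state = [], action ) =>
	action.type === 'ADD_PRODUCT' ? [ ...state, action.product ] : state;
const cart = ( state = [], action ) => state;

const reducer = combineReducers( { products, cart } );
const state = reducer( undefined, { type: 'ADD_PRODUCT', product: { id: 1 } } );
// state.products → [ { id: 1 } ], state.cart → []
```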

With that in mind, feel free to have a go at writing a reducer for our example app. When you’re done, come back here and compare it with what I’ve put together.

Done? Alright let’s compare:

import TYPES from "./action-types";

const { CREATE, UPDATE, DELETE, HYDRATE } = TYPES;

const reducer = (
  state = { products: [] },
  { products: incomingProducts, product, productId, type }
) => {
  switch (type) {
    case CREATE:
      return {
        products: [...state.products, product]
      };
    case UPDATE:
      return {
        products: [
          ...state.products.filter(existing => !==,
          product
        ]
      };
    case DELETE:
      return {
        products: state.products.filter(existing => !== productId)
      };
    case HYDRATE:
      return { products: incomingProducts.products };
    default:
      return state;
  }
};

export default reducer;

Let’s break down the above example a bit. What is happening?

First, our reducer function is defining a default value for the state argument of { products: [] }. This ensures that if the state has not been initialized yet, it will assume this value. It’s a great way to describe the shape of the state in your store (especially if your store is fairly simple with a single reducer function).

Second, our reducer function is checking what type is coming from the provided action object and defining what to do if a specific type is passed in on the action object. So for instance, this particular reducer is listening for the CREATE, UPDATE, DELETE, and HYDRATE type constants and reacting accordingly.

Third, each type has logic associated with it that updates the state using the data provided by that action object. An action object with the CREATE action type constant will return a new state object with the passed-in product added to the products array. An action object with the UPDATE action type constant will return a new state object replacing a product in the products array with the incoming product (or adding it if it doesn’t exist in the state yet). An action object with the DELETE action type constant will return a new state object with the products array excluding the product matching the provided productId. An action object with the HYDRATE action type constant will replace the products array in the state (with the provided array of products).

Note: Notice how the HYDRATE action type is using the products key on incomingProducts. This is because the dummy data the service returns has the actual products on that key.

Finally, if the action object does not have a matching type for what the reducer is watching for, then the current state is simply returned.

The reducer plays an important role in a store because it’s the method by which the state is updated and signals to the app subscribing to the store any state changes.
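That role can be sketched with a bare-bones Redux-style store (my own simplification, not the actual @wordpress/data internals): dispatching runs the reducer, and subscribers are notified when the state changes.

```javascript
// Minimal store sketch: the reducer is the only way state changes,
// and a state change is what signals subscribers.
function createStore( reducer ) {
	let state = reducer( undefined, { type: '@@INIT' } );
	const listeners = [];
	return {
		getState: () => state,
		subscribe: ( fn ) => listeners.push( fn ),
		dispatch( action ) {
			const next = reducer( state, action );
			// Only notify when the reducer returned a new state object.
			if ( next !== state ) {
				state = next;
				listeners.forEach( ( fn ) => fn() );
			}
		},
	};
}

const store = createStore( ( state = { products: [] }, action ) =>
	action.type === 'CREATE'
		? { products: [ ...state.products, action.product ] }
		: state
);

let notified = 0;
store.subscribe( () => notified++ );
store.dispatch( { type: 'CREATE', product: { id: 1 } } );
// notified → 1, store.getState().products.length → 1
```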

In the next post of this series we can finally put all the parts together to register our store!

WordPress Data Store Properties: Resolvers

This entry is part 8 of 15 in the series, A Practical Overview of the @wordpress/data API

Before we took our side detour into learning about the controls property, I mentioned that since we now have a selector for getting products, it’d be good to implement a way to retrieve the products from the server. This is where resolvers come in.

Let’s get started by creating a resolver for retrieving products. If you’re following along, create a new file in the store folder called resolvers.js. Your file structure should end up being src/data/products/resolvers.js.  Add this to the file:

import { fetch } from "../controls";
import { hydrate } from "./actions";
import { getResourcePath } from "./utils";

export function* getProducts() {
  // The service returns paginated results. For the purpose of this demo
  // we only need 10, hence the limit param.
  const products = yield fetch(getResourcePath() + "?limit=10");
  if (products) {
    return hydrate(products);
  }
}

In our example, the following things are happening:

  • The first expression yields the fetch control action, which returns the response from the window.fetch call when it resolves. If you think of the yield acting like an await here, it might make more sense. The return value is assigned to products.
  • If there is a truthy value for products, then the hydrate action creator is called and the resulting value returned which essentially results in dispatching the action object for hydrating the state.

Note: getResourcePath() is just a utility I created for setting up the URL for interacting with the dummyjson API. It should already be in the products/utils.js file for you. Note that in the above example I’ve also added a limit parameter, since we don’t need the full list of products the service returns.

The hydrate action creator is new here; we haven’t created it yet. This is a good opportunity to think through what you may need to do for the hydrate action. So take a few minutes to try that out yourself and then come back.

Back? How did it go? You should end up with something like this in your action-types.js file:

const TYPES = {
  CREATE: "CREATE",
  UPDATE: "UPDATE",
  DELETE: "DELETE",
  HYDRATE: "HYDRATE"
};

export default TYPES;

Notice that we’ve added the HYDRATE action type here.  Then in your actions.js file you should have something like this:

import TYPES from "./action-types";

const { HYDRATE } = TYPES;

export const hydrate = products => {
  return {
    type: HYDRATE,
    products
  };
};

Now that you have that good to go, I want to take a bit of time here to let you know about some important things to remember with resolvers:

The name of the resolver function must be the same as the name of the selector it resolves for.

Notice here that the name of this resolver function is getProducts which matches the name of our selector.

Why is this important?

When you register your store, @wordpress/data internally maps selectors to resolvers, matching on their names. That way, when client code invokes a selector, @wordpress/data knows which resolver to invoke to resolve the data being requested.

The resolver will receive whatever arguments are passed into the selector function call.

This isn’t as obvious with our example, but let’s say our selector was getProduct( id ); our resolver will then receive the value of id as an argument when it’s invoked. The argument order will always be the same as what is passed in via the selector.

Resolvers must return, dispatch or yield action objects.

Resolvers do not have to be generators, but they do have to return (or yield, if generators) or dispatch action objects. If you need to use controls (to handle async side effects via control actions), then you’ll want to make your resolver a generator. Otherwise you can just dispatch or return action objects from the resolver.
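For example, a resolver with no async work can simply return a plain action object. (The action type and data here are illustrative, not from the series’ example app.)

```javascript
// A non-generator resolver: no controls needed, so it just returns
// an action object for @wordpress/data to dispatch.
function getCurrencies() {
	return { type: 'SET_CURRENCIES', currencies: [ 'USD', 'EUR' ] };
}

getCurrencies(); // → { type: 'SET_CURRENCIES', currencies: [ 'USD', 'EUR' ] }
```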

At this point you may have a few questions:

  • How does the client know what selectors have resolvers?
  • Can I detect whether the selector is resolving? How? 
  • Is the resolver always invoked every time a selector is called?

Before answering these questions, I think it’ll help if I unveil a bit of what happens in the execution flow of the control system in @wordpress/data and the implementation of generators in resolvers.

So let’s break down roughly what happens when the getProducts selector is called by client code using the following flow chart:

A flowchart describing the execution flow with resolvers in @wordpress/data

Let’s step through this flowchart. When you call a selector, the first thing that happens is the selector returns its value. Then, asynchronously, some logic checks to see if there is a related resolver for the selector. If there is, some internal logic determines whether resolution has started yet, or has finished.

Let’s pause here for a couple minutes and jump into a little side trail about resolution state.

Having a resolver that handles side-effects (usually asynchronously retrieving data via some sort of api) introduces a couple problems:

  • How do we keep the resolver logic from being executed multiple times if its initial run hasn’t completed yet (very problematic if we’re making network requests)?
  • How do we signal to client code that the resolver logic has finished?

In order to solve these problems, @wordpress/data automatically enhances every registered store that registers controls and resolvers with a reducer and a set of selectors and actions for resolution state. This enhancement allows @wordpress/data to be aware of the resolution state of any registered selectors with attached resolvers.

Resolution state is tracked by storing a map from selector name and selector args to a boolean. This means each resolution state is tied not only to the selector that was invoked, but also to the args it was invoked with. The arguments in the map are matched via argument order, matching primitive values, and equivalent (deeply equal) object and array keys (if you’re interested in the specifics, the map is implemented using EquivalentKeyMap).
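As a rough sketch of that argument-keyed tracking (JSON-serializing the args here is a crude stand-in for EquivalentKeyMap that only works for plain, consistently ordered data; the function names mirror the concepts above but are my own simplification):

```javascript
// Resolution state keyed by selector name + args: undefined means never
// started, true means started, false means finished.
const resolutionState = new Map();

const key = ( selectorName, args ) =>
	selectorName + ':' + JSON.stringify( args );

const startResolution = ( name, args ) =>
	resolutionState.set( key( name, args ), true );
const finishResolution = ( name, args ) =>
	resolutionState.set( key( name, args ), false );
const getIsResolving = ( name, args ) =>
	resolutionState.get( key( name, args ) );

startResolution( 'getProduct', [ 10 ] );
getIsResolving( 'getProduct', [ 10 ] ); // → true
getIsResolving( 'getProduct', [ 99 ] ); // → undefined (different args)
```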

With this enhanced state (which is stored in your store state on a metadata index), you have the following selectors (for usage examples, we’ll use our getProducts selector as an example):

  • getIsResolving: Returns the raw isResolving value for the given selector and arguments. If undefined, the selector has never been resolved for the given set of arguments. If true, resolution has started. If false, resolution has finished. Example usage: select( 'data/products' ).getIsResolving( 'getProducts' );
  • hasStartedResolution: Returns true if resolution has already been triggered for a given selector and arguments. Note, this will return true regardless of whether the resolver has finished or not; it only cares that there is a resolution state (i.e. not undefined) for the given selector and arguments. Example usage: select( 'data/products' ).hasStartedResolution( 'getProducts' );
  • hasFinishedResolution: Returns true if resolution has completed for a given selector and arguments. Example usage: select( 'data/products' ).hasFinishedResolution( 'getProducts' );
  • isResolving: Returns true if resolution has been triggered but has not yet completed for a given selector and arguments. Example usage: select( 'data/products' ).isResolving( 'getProducts' );
  • getCachedResolvers: Returns the collection of cached resolvers. Example usage: select( 'data/products' ).getCachedResolvers();

You will mostly interact with resolution state selectors to help determine whether an API request within a resolver is still resolving (useful for setting “loading” indicators). The enhanced resolution logic on your store also includes action creators, but typically you won’t need to interact with these much, as they are mostly used internally by @wordpress/data to track resolution state. However, the resolution invalidation actions can be very useful if you want to invalidate the resolution state so that the resolver for the selector is invoked again. This can be useful when you want to keep the state fresh with data that might have changed on the server.

As with the selectors, the action creators receive two arguments, the first is the name of the selector you are setting the resolution state for, and the second is the arguments used for the selector call.

  • startResolution: Returns an action object used in signalling that selector resolution has started. Example usage: dispatch( 'data/products' ).startResolution( 'getProducts' );
  • finishResolution: Returns an action object used in signalling that selector resolution has finished. Example usage: dispatch( 'data/products' ).finishResolution( 'getProducts' );
  • invalidateResolution: Returns an action object used in signalling that the resolution cache should be invalidated for the given selector and arguments. Example usage: dispatch( 'data/products' ).invalidateResolution( 'getProducts' );
  • invalidateResolutionForStore: Returns an action object used in signalling that the resolution cache should be invalidated for the entire store. Example usage: dispatch( 'data/products' ).invalidateResolutionForStore();
  • invalidateResolutionForStoreSelector: Returns an action object used in signalling that the resolution cache should be invalidated for the selector (including all caches that might exist for different argument combinations). Example usage: dispatch( 'data/products' ).invalidateResolutionForStoreSelector( 'getProducts' );

Now that we know about the resolution state, let’s return to the flowchart. We left off at the point where @wordpress/data has determined there’s a related resolver for the given selector and is determining whether resolution has started yet or not. Based on what we just looked at, you should know how it does this. Right! It’s going to call a resolution state selector. The specific selector called here is hasStartedResolution. If that returns true, then @wordpress/data will basically abort (because the resolver is already running, or has completed asynchronously).

If hasStartedResolution( 'getProducts' ) returns false, however, then @wordpress/data will immediately dispatch the startResolution action for the selector. Then the getProducts resolver is stepped through. Now remember, getProducts is a generator, so internally @wordpress/data will step through each yield it finds in the resolver. Resolvers and controls are expected to only yield action objects or return undefined.

The first value yielded from the resolver is the fetch control action. Since wp.data recognizes that this action type is for a control, it proceeds to invoke the control. Recognizing that the control returns a promise, it awaits the resolution of the promise and returns the resolved value to the generator by calling the generator’s next( promiseResult ) function with the result. This assigns the value to the products variable and the generator function continues execution. If there are no products, then the resolver returns undefined, and this signals to the resolver routine that the resolver is done, so wp.data will dispatch the finishResolution action for the selector.
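The stepping logic just described can be sketched as a toy routine in plain JavaScript. The real implementation lives in @wordpress/redux-routine; the action type names (`FETCH`, `HYDRATE_PRODUCTS`, and the start/finish markers) are illustrative stand-ins, not wp.data’s actual types:

```javascript
// Toy version of the resolver-stepping routine described above.
const controls = {
  // Stand-in for the real fetch control; like fetch, it returns a promise.
  FETCH: () => Promise.resolve( { products: [ { id: 1, name: 'Scarf' } ] } ),
};

async function runResolver( resolver, dispatch ) {
  dispatch( { type: 'START_RESOLUTION' } );
  const gen = resolver();
  let step = gen.next();
  while ( ! step.done ) {
    const action = step.value;
    if ( controls[ action.type ] ) {
      // Yielded a control action: invoke the control, await its promise,
      // and hand the resolved value back to the generator via next().
      step = gen.next( await controls[ action.type ]( action ) );
    } else {
      // Yielded a plain action object: dispatch it to the store.
      dispatch( action );
      step = gen.next();
    }
  }
  // Returning an action from the generator dispatches it; returning
  // undefined simply ends resolution.
  if ( step.value !== undefined ) dispatch( step.value );
  dispatch( { type: 'FINISH_RESOLUTION' } );
}

function* getProducts() {
  const { products } = yield { type: 'FETCH', path: '/products' };
  if ( ! products.length ) return; // nothing to hydrate
  return { type: 'HYDRATE_PRODUCTS', products };
}
```

Running `runResolver( getProducts, dispatch )` dispatches the start marker, then the hydrate action, then the finish marker, mirroring the flow in the text.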

If there are products, then the resolver returns the hydrate action. This triggers the dispatching of the hydrate action by wp.data, followed by the dispatching of the finishResolution action, because returning from a generator signals it is done. When the hydrate action is processed by the reducer (which we haven’t got to yet), this will change the state, triggering subscribers to the store, which in turn will trigger any selects in the subscriber callbacks. If the call to getProducts was in a subscribed listener callback, it will then get invoked and the latest value for getProducts (which was just added to the state) will get returned.
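That subscribe/select cycle can be shown with a compact sketch: the hydrate action runs through a reducer, the state changes, subscribers fire, and a select inside the listener sees the new products. The tiny store and the `HYDRATE_PRODUCTS` type are illustrative, not wp.data’s actual code:

```javascript
// Minimal store sketch to illustrate the hydrate -> reducer -> subscriber
// -> select cycle described above.
function createTinyStore( reducer ) {
  let state = reducer( undefined, { type: '@@INIT' } );
  const listeners = [];
  return {
    getState: () => state,
    dispatch( action ) {
      state = reducer( state, action );
      // State changed: notify every subscriber.
      listeners.forEach( ( listener ) => listener() );
    },
    subscribe( listener ) { listeners.push( listener ); },
  };
}

const reducer = ( state = { products: [] }, action ) =>
  action.type === 'HYDRATE_PRODUCTS'
    ? { ...state, products: action.products }
    : state;

const selectProducts = ( state ) => state.products;

const store = createTinyStore( reducer );
let latest = [];
store.subscribe( () => {
  // Re-select on every state change, like a subscribed listener callback.
  latest = selectProducts( store.getState() );
} );
store.dispatch( { type: 'HYDRATE_PRODUCTS', products: [ { id: 1, name: 'Scarf' } ] } );
```

After the dispatch, the subscriber has already re-run and `latest` holds the hydrated products.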

That in a nutshell (a pretty big nutshell at that), is how resolvers work!

Sidenote: if you want to dig into the technical logic powering this, check out the @wordpress/redux-routine package. It is implemented automatically as middleware in wp.data to power controls and resolvers, but it can also be used directly as middleware in any Redux-based app.

WordPress Data Store Properties: Actions

This entry is part 5 of 15 in the series, A Practical Overview of the @wordpress/data API

In the previous post of this series, we took a brief interlude to look at an app that is using tree state to manage the data the app is using. We discovered that, while the app was functional, some potential problems with using state this way were beginning to surface. In this post, we’re going to start learning about the various data store properties (the properties of the options object we use when registering our store), and in the process we’re going to convert this app over to use wp.data! For starters, we’re going to focus on the products data in our store, so let’s just leave the cart data alone for now.

If you’re more of a hands-on person, you can participate in this conversion by going to this sandbox I’ve prepared. It’s the same app as what I introduced in the previous post, except with a few differences:

  1. We’re going to wire things up to an external server, and to do this we’ll use a free service called dummyjson. I’ve set up a simple component to help wire things up to this service. This will become important later when we start working on communicating with this service for our app. The dummyjson service mocks an API for product data. The updates and deletes are simulated, but for our purposes it’s a great tool for demonstrating reads/writes from a server or API.
  2. You’ll see in the src folder a data folder.  We’ll use this folder to keep our new data store in.
  3. Inside the data folder you’ll see a controls.js file. I’ve gone ahead and created a control that works with the native window.fetch API. Don’t worry about this for now, we’ll see how it’s used later on.
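As a preview, here is a hedged sketch of what a window.fetch based control might look like (the actual controls.js in the sandbox may differ). The names here, including `createControls` and the `FETCH` type, are my own illustrations; the fetch implementation is injectable so the handler can be exercised outside a browser:

```javascript
// Illustrative sketch of a fetch control: an action creator plus a
// control handler that performs the request and resolves with JSON.
const fetchAction = ( path, options = {} ) => ( {
  type: 'FETCH',
  path,
  options,
} );

const createControls = ( fetchImpl ) => ( {
  FETCH( { path, options } ) {
    // Controls may return a promise; the resolved value is handed back
    // to the yielding resolver.
    return fetchImpl( path, options ).then( ( response ) => {
      if ( ! response.ok ) {
        throw new Error( `Request failed with status ${ response.status }` );
      }
      return response.json();
    } );
  },
} );

// In the browser you would pass the native implementation, e.g.:
// const controls = createControls( window.fetch.bind( window ) );
```

A resolver would then `yield fetchAction( '/products' )` and receive the parsed JSON back, which is exactly the pattern we’ll lean on later in the post.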

You can go ahead and fork this sandbox so that you can make it your own as we work through the first property of the configuration object for registering our store.
