The 7 Deadly Sins of Developer Experience with Cristiano Betta #APIDAYSAU

One of the sessions I enjoyed the most at API Days 2018 was Cristiano Betta’s talk on Developer Experience (DX), i.e. how to more effectively engage with developers who are consuming your APIs.  The learnings go beyond developer onboarding specifically, and are applicable to product development in general – which is partially why it was so cool.

Slides here: https://betta.io/blog/2017/11/10/the-seven-sins-of-developer-experience

I also caught-up briefly with Cristiano afterwards where he expanded on a couple of points, as the talk he gave was a slightly shorter version of a longer talk.

An overarching theme was reducing cognitive load through the use of fundamental design principles.  The deadly sins he covered were mainly around information:

  1. Too much
  2. Too soon
  3. Too little
  4. Unstructured
  5. Unsupported
  6. Incomplete

…with “no control” over tools as #7.

There was a variety of points of interest that I noted down, which I’ll briefly cover, but the things that really grabbed my interest were:

  • “Too little / too late”, which is effectively about taking a holistic approach.
  • The idea of measuring and responding to developer friction.

Note – the focus of Cristiano’s talk is around the developer experience in terms of on-boarding rather than the API design itself – for more info on that (developer experience in terms of API design) you might want to check out something like APIs You Won’t Hate.

Too Little, Too Late

This is partially about documentation – but not in the sense of manuals, it’s more about providing enough information when it is needed.  Case in point: resolving errors.

The example Cristiano gave was when a developer is making a call to your API (probably for the first time) and they encounter an error – e.g. related to input.  Let us say they call your API and it provides this response:

{
  "error": "000123 - Invalid input"
}

What you want to avoid is the situation where the developer needs to resort to internet searching.  Sure, you might have it covered in your help documentation:

Developer Guide – Error Codes – Page 421

Error 000123 – Invalid input.  Occurs when you use a boolean on a Friday, on Friday you must use an int: 0 = false and 1 = true.

Your problem is that developers will already have formed familiar techniques for dealing with issues like this, probably using online resources – resources they are familiar with, and which through habit present them with a relatively low cognitive load.

There are many reasons why this is bad: you have no control over the experience, how long it will take, or how frustrated they will get – not to mention the “OMG, I can’t believe they don’t just say that” and “why is this so unnecessarily hard” comments all over StackOverflow.com.

What’s the Solution?

A better way to do it is to include useful information in the response’s error message itself:

{
  "error": "000123 - Invalid input. 
    Occurs when you use a boolean on a Friday, on Friday 
    you must use an int: 0 = false and 1 = true."
}
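
Taking that a step further – and this is my own extrapolation rather than something from the talk – you can keep the hint machine-readable by structuring the error payload, with the human-readable explanation and a link to the relevant docs as separate fields.  A minimal C# sketch (the type, field names and docs URL are illustrative, not from the talk):

using System.Text.Json;

// Illustrative error payload - the field names and docs URL are hypothetical.
public record ApiError(string Code, string Message, string Hint, string DocsUrl);

public static class ErrorResponses
{
  public static string InvalidInput() =>
    JsonSerializer.Serialize(new ApiError(
      Code: "000123",
      Message: "Invalid input",
      Hint: "On Fridays the field must be an int: 0 = false, 1 = true.",
      DocsUrl: "https://example.com/docs/errors#000123"));
}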

Yes, you can also have this information in your developer guide.  The trick is including the relevant information when it’s needed; not too much, not too little, and just at the right time.  This leads on nicely to another cool concept…

Developer Friction

Adrian Trenaman’s QCon NY 2017 presentation on Developer Experience included the idea of minimising “the distance between ‘hello, world’ and production”.  In that context he was discussing development in a holistic sense (tooling, environment, and so on) where you are employing developers, but as Cristiano explained to me, you can also look at “developer friction” in the context of developer adoption of your APIs.

In this context, developer friction is effectively the amount of time between (a) making an API call that errors and (b) the first successful call to the same API – or some meaningful variation along those lines, such as the time between developer registration and their first successful API call.

So, imagine that you have 10 developers a day signing up to your API and making their first ‘hello world’ call.  Let’s say 50% of them get an error the very first time they make an API call, and on average 90% of those developers are able to make a successful call within 2 minutes.  Now compare that to a situation where 80% get an error the first time, and of those, 90% take on average 2 hours to make a successful call.  Clearly the second situation has much higher developer friction.

According to Cristiano, some organisations use techniques like this to monitor adoption of their APIs and specifically to help them identify areas where their overall developer experience may need improvement.
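
As a thought experiment on how you might actually compute such a number – this is my sketch, not something Cristiano showed – imagine you log every API call with a developer ID, a timestamp and a success flag:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical log entry - the type and field names are illustrative.
public record ApiCall(string DeveloperId, DateTime Timestamp, bool Succeeded);

public static class FrictionMetrics
{
  // Median time between a developer's first failed call and their first
  // subsequent successful call - one possible "developer friction" number.
  public static TimeSpan? MedianTimeToFirstSuccess(IEnumerable<ApiCall> calls)
  {
    var gaps = new List<TimeSpan>();

    foreach (var perDeveloper in calls.GroupBy(c => c.DeveloperId))
    {
      var ordered = perDeveloper.OrderBy(c => c.Timestamp).ToList();

      var firstError = ordered.FirstOrDefault(c => !c.Succeeded);
      if (firstError == null) continue; // never hit an error

      var firstSuccess = ordered.FirstOrDefault(
        c => c.Succeeded && c.Timestamp > firstError.Timestamp);
      if (firstSuccess == null) continue; // gave up - worth tracking separately

      gaps.Add(firstSuccess.Timestamp - firstError.Timestamp);
    }

    if (gaps.Count == 0) return null;
    gaps.Sort();
    return gaps[gaps.Count / 2];
  }
}

Track a number like that over time (or split it by endpoint) and you have a concrete signal for where your onboarding experience is hurting.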

Other Gems

I won’t go into these concepts in much detail, and hopefully you’re at least aware of them already – if not I kindly (and strongly) suggest you check them out.  Cristiano’s slide-deck is a great place to start.  It covers a lot more than what I have included here.

Cognitive Load, Overload and Progressive Disclosure

Cognitive load refers to the effort being used in working memory; cognitive overload is where (for example) a learner is unable to process the amount of information or number of tasks presented to them at once.  Solutions to this include:

  • Chunking information up, e.g. into lists of about 8 items, with a useful heading.
  • Applying the 80/20 rule, e.g. call out the small number (~20%) of items that developers are most likely going to be seeking, especially if they are new to your platform, and leave the other 80% accessible but through other navigational means.

That second point is an example of Progressive Disclosure, a great technique for managing cognitive load, covered in detail in the book “Universal Principles of Design”.

Another really interesting pitfall around cognitive load was around asking people questions, like on sign-up forms:

[Slide: an example sign-up form question]

As Cristiano explained, this may look simple but it raises a lot of questions in people’s heads – questions which might not seem a big deal to you but can be problematic for others (especially if it’s mandatory):

  1. Who will see this?
  2. Can I change it later?
  3. What do you need it for?

These 3 simple questions really resonated with me, and they provide a simple checklist you should consider when reviewing questions you ask your customers.  I know from firsthand experience that questions like this, in some circumstances, force me to stop and think way more than should be necessary.

Tools Out of Control

This is where community tools and SDKs are more visible than your own.  Unfortunately Cristiano didn’t have time to go into this in a lot of depth in terms of solutions, but clearly SDKs and other tools are an integral part of your offering, and a critical part of DX; so it’s important to have a plan in place for managing these as part of your product.

This is most likely going to include monitoring the community – where they are; understanding what tools they want; staying engaged.

Using Structure

Another nice 3 point list was around structure – i.e. allowing people to navigate through the information you provide them:

  1. Where am I?
  2. Where can I go?
  3. Where did I come from?

Telling a Story

Whilst having information in inherently useful structures is good, you can augment this in key situations (such as developer onboarding) with storytelling – another technique covered in the Universal Principles of Design.

Cristiano cited Pusher as an example of doing this well – the “hello world”, make-your-first-app story.  Here are the screenshots; as you can see, the path from account creation to “hello world” has been streamlined, and users can easily opt out of it if they want.

[Slides: Pusher’s onboarding flow, from account creation through to “hello world”]


Real World Machine Learning with Susie Sheldrick #APIDAYSAU

I’m at the API Days conference, and one of the first sessions of note was Deep Learning: Real World Applications with Susie Sheldrick, which explored some of the practical real-world challenges related to machine learning, based on experience.  I also caught up with her after the session, where we expanded on some of the curlier questions.

Quick Context: 30 Second Intro to Machine Learning

Susie kicked off with a simple diagram that sums up what machine learning is in comparison to traditional applications:

[Diagram: machine learning versus traditional applications]

Machine learning partially turns this model on its head: the solution is able to “learn” its own rules (through training its internal rules model) at much greater scale than some person/team coding them by hand.  So, rather than feeding data and manually created rules into a solution, you simply train the solution to produce its own rules.

The Chaser

This nice intro kicked off a mental train of thought for me: in practice the more complete solution probably looks something like this:

[Diagram: my take on the more complete solution]

The end goal is still to build a solution that provides the answers users were seeking; we’re simply using machine learning to help out with the rules.

Devil in the Detail

That all sounds wonderful on paper – or in ivory-tower pixels – but, as should be no surprise, the real world is not so straightforward.

Of critical importance:

  • Understanding the problem you’re trying to solve.
  • Gathering the right data to train the model.

This is much easier said than done; it transpires that:

  • It’s all too easy to inadvertently train bias into the rules model.
  • Tracing exactly how the AI made a specific decision actually turns out to be really hard.

Whilst the second point has obvious implications for developers and testers, both points combined have massive implications for your legal teams, anyone who considers themselves ethical (like you, right?), product owners and anyone at the receiving end of a machine determined decision.

Bias

Susie gave some examples of unexpected and undesirable bias ending up in rule models, such as one experiment that determined prisoners’ eligibility for parole.  It turned out that the model significantly favoured granting parole to white prisoners and was much less favourable towards prisoners of colour.  In terms of parolees actually reoffending, the real-world results were the exact opposite of the bias.

It turns out that the information used to train the model was “correct” but only in the sense that it faithfully transposed the bias already inherent in the legal system, against people of colour.

True Representation

A related issue isn’t so much bias in the data as bias stemming from an absence of data.  Once more, issues of race come to the fore; this time it was a passport application solution that told an Asian gentleman his submitted photo “did not meet our standards” because he was “asleep”.  As you might be able to guess, the model had obviously not been sufficiently trained with data that faithfully represented the entire user base, and therefore could not correctly handle non-European facial features.

Just to be crystal clear, the technology is more than capable of correctly handling a wide range of cases, nuances and subtlety – including racially based facial features.  The actual issue is the correct training of the model – meaning it’s critical to gather the right data, data that covers the entire spectrum of cases.  Not to mention testing and monitoring the behaviour of the solution.

Building an AI Solution: Custom or OOTB?

If you’re about to embark on a project that involves machine learning, one of the practical questions you’ll come up against is whether you can use an Out-Of-The-Box (OOTB) solution or need to custom-build something.  Susie’s discussion here was mostly in reference to the rule models specifically.  If you want a model capable of identifying cats in pictures online for your meme generator – you’re in luck; but if you need to correctly identify something more obscure, or more specific, you may have to build this model yourself.  Which is why the stuff above about bias is so important, because you’re going to have to navigate that minefield yourself.

Further Questions

Our chat after the session was very stimulating; a couple of the more curly questions that our conversation provoked were:

How to identify, and test for, unexpected bias?

The obvious ethical reaction to all of this is “great, let’s ensure we keep unwanted bias out of the model and our solution”.  What is much less obvious is how to do that.

Were the team behind the parole example conscious of the bias in that solution?  Let us assume they weren’t aware of it – in such a situation how would they (or you) identify that bias, and, once the solution was operational, how would you ensure no new bias was introduced?

This is where, for me, machine learning is like a lens that amplifies human behaviours and bias.  It has the potential to help expose them, but how clearly, how soon, and at what cost?

How will your model react in the event of change over time?  I.e. if there is a fundamental shift in the (data) foundations on which the model was originally conceived and trained?

For example, Google is looking at moving back into the Chinese market, despite pulling out some years ago due to human rights concerns.  Hypothetical example: let’s assume that they have machine learning models built up based on the data they currently have access to – i.e. data that does not include China’s current population of roughly 1.3 billion.

What would happen if 1.3 billion Chinese people suddenly have access to a Google solution that is backed by a rules model that was not trained with them in mind?  Sure, Google’s data should be a fair representation of their current global user base, which will include Chinese – but wouldn’t adding 1.3 billion people potentially shift the model?  How will it react?  Will the responses it provides be biased against the new user population because hitherto they were not expected by the model?  Will the model be able to adapt over time, and if so how long will that be?

 

Please note that this post is based on rapidly scrawled notes in session and my recollection of subsequent discussions – my accuracy should be reasonable but may not be perfect.


Fireside Chat with Zheng Li, VP of Product @ Raygun – Product Tank Wellington MeetUp

The Product Tank Wellington meetup ran a “fireside chat” last night with Zheng Li, who is VP of Product at Raygun – a Wellington-based company – and currently on loan to the U.S.

The conversation covered her career path to Product via UX and advertising, her championing of women in tech and passion for business, as well as delving into specific topics around being a product person.

Here are the key takeaways I jotted down, which I’ve tried to organise by topic…

Career Path

Zheng gave us a neat little story about how she started out (in a sense): a classic tale of taking something that nobody else wanted to do and absolutely nailing it.

The task was designing banner ads for TradeMe.  She obviously attacked her self-imposed challenge with passion and drive (significant keys to success on their own), but I also noted that:

  • She formed a loose multidisciplinary team which (I think) included marketing folks and people with knowledge of, and access to, data analytics.
  • She was data driven – each time she/they ran a new design, they would analyse the data to see what was working and what wasn’t, and think about why that was the case.

The other factor which she used to her advantage was being able to iterate at an appropriate speed – which was obviously supported by the data she and her team had access to.

Some pretty obvious takeaways there; a key one for me is being data driven / enabled – the implication being that you need to have the data in the first place.  As a data architect colleague of mine once said: before doing any data design, you must first think about what questions you will want to ask of your data.

Other stand-out points around career path included:

  1. Turning weaknesses into strengths, by using them as differentiators.  The context for this was around credibility.
  2. Follow your passion.  Zheng laughed in response to a question – someone asked something which implied she had planned her career out; she said that in retrospect her career may look like it was planned, but the reality at the time was anything but.  Her response to challenges was to consciously seek out ways of addressing them – which in her case frequently included training courses, which she collectively found effective (I think for one particular area she did 7 different courses).
  3. People want to work with people they like and trust.  Zheng spoke of this in reference to relationships between companies, but it’s obvious from her perspective that this is based on interpersonal rapport.  It’s not hard to see this concept also applying at a personal career level – something I can attest to having also experienced it first-hand.

Another key career theme Zheng had was based on “that Venn diagram” – meaning the three overlapping lenses in Design Thinking which cover business/viability, technology/feasibility and people/desirability.  The specific terms she used might have been a little different, but for me the connection was pretty clear.

Her basic advice was to become proficient and confident in any two of these lenses; although that seemed to be somewhat tempered with her other guiding principle of being customer focused – which suggests the business/viability and people/desirability lenses.

“Product” Means Being Close to Customers

This was one of Zheng’s key themes.  Part of this was getting out and talking to customers, which is critical.

It was interesting to hear of her experiences using product “management” (my term, not hers – can’t recall exactly what she called it) as a selling tool.  The basis for this was:

  1. Selling the value of the product, not the product.
  2. Establishing a 1-on-1 rapport with people, and understanding what kept them up at night.
  3. Taking the time to really understand that problem from different angles.

As far as point #3 goes, that meant engaging with different people in the organisation to understand the problem from their perspective: technical, marketing, sales, etc; this obviously links back to the three lenses of design thinking mentioned above, and being close to customers – all good sensible product management stuff.

We can also expand this theme out from “customers” to “people”.  In her experience, product management is more about being people-based than technology-based (this was mentioned in reference to a technical product for developers).

There was also a leadership angle: for her leadership was about aligning the purpose of her staff to the purpose of her business.  The implication here is to talk with the people on your team and really understand what drives them and where they want to go with their career.

A Quick Note on Persuasion

If you want to persuade someone (such as your product manager – if you’re a tech working on the product, and you have a pet feature you want to add), you need to do two things:

  1. Speak in the language of the audience.
  2. Back it up with data.  This could be qualitative such as customer feedback, or quantitative data showing conversion rates.

Producty Bits

Dealing with Product Debt

Something I really liked was how she addressed debt – debt in the sense of technical debt, and even marketing debt, and so on: things which worked but could work better and had gotten to the point that they were affecting the bigger picture.  She referred to it (I think) as the “99 issues” or “99 problems” story.

  1. They got all the issues and logged them into Jira – meaning that they got it all out into the open.  Not just development/technical debt, everything.
  2. Presumably some sort of sizing and prioritisation work took place.
  3. They then knocked off a number of the items, reducing the overall debt.

The way she spoke seemed to indicate this was a roughly annual event – though one which didn’t happen every year.  Bit of a spring-clean, I guess.  Zheng didn’t call it out specifically but based on her other comments I presume space in the team’s capacity / product roadmap was allocated to this work.

Another interesting idea which occurred to me as she described this was the technique that Agile / Scrum teams sometimes use, whereby they adopt a sprint goal – something non-deliverable – that they want to improve during the course of the sprint/iteration/timebox.  Zheng didn’t explicitly say that was what they were doing but the idea seems relevant.  Zheng, if you ever read this I’d be interested to know if that concept was one you consciously used or were aware of.

Roadmap

Items on a roadmap (i.e. the implied promise / expectations set) should be based on two things:

  1. The team’s capacity to deliver them.
  2. Evidence that a given feature is wanted by customers.

Pushing Back

Don’t be afraid to push back.  If a customer requests a feature (for example) that is outside your roadmap and/or ability to deliver, then be wary of following the money.

This definitely fits with my experience; I tend to think that at an inter-business or interpersonal level, the relationship needs to be built on mutual trust and respect – if the other party does not reciprocate then they’re probably not someone you want to be dealing with.

Zheng gave two examples:

  1. A major multinational effectively tried to bully their 50 wanted features onto Zheng’s existing product roadmap – “do you want our business or not?”.  To have done so would have caused massive chaos within the company, affecting product delivery and so on.  Zheng counter-proposed a different approach which she and her teams could sustain.  The multinational rejected the offer and went elsewhere – only to return months later, accepting Zheng’s proposals.
  2. Another major company approached Zheng with features (she didn’t give specifics, but I think we can guess their approach was more reasonable and more adaptable).  Zheng recognised that some of these features would be great differentiators for their product, so (presumably) some changes were made to the product roadmap and the features added – in essence Zheng followed the money, but did so because there was more to gain than just the money.

Final Thought: The Iron Triangle

At one point Zheng told an anecdote about a developer talking with her about code quality.  I forget the story, but it reminded me of the old “Iron Triangle” or project management triangle – the one that is made up of scope, quality and cost (or some similar combination; cost and time obviously being closely related).  The model effectively states that you can control any two; the implication being that if you nail people down in terms of scope and cost (or time), you have no control over quality.

I asked Zheng if she was familiar with that model and how she approached it.  Her answer wasn’t as clear-cut and direct as I would have hoped (which is not a criticism – having presented publicly I know how hard it is to provide an off-the-cuff answer that is cohesive and concise), but seemed to boil down to this:

  1. Her first substantive reaction was to discuss scope and features, so I would guess that this is her first priority.  This would align with her other comments that put great importance on being close to the customer and understanding their needs.
  2. Her second substantive reaction was to discuss product roadmaps, specifically in reference to their timing and how they are used as the basis for cross-team coordination (marketing and so on), so I imagine time would be her second priority.

By default this would leave quality to manage itself; but we shouldn’t forget the “spring clean” approach, whereby random items of debt (arguably involving quality) can be addressed in a structured way.

 

 

Use a Single Object Reference Globally in a UWP Application

Sometimes you might want a single object reference that can be accessed from any page within your application.  Creating an instance of it on the App class and accessing it via App.Current may be appropriate.

In this example we’re using a string, but you could easily use a custom object of your own creation.

In the App.xaml.cs file:

sealed partial class App : Application
{
  public string SomeValue;

  /// <summary>
  /// Initializes the singleton application object. This is the first line of authored code
  /// executed, and as such is the logical equivalent of main() or WinMain().
  /// </summary>
  public App()
  {
    this.InitializeComponent();
    this.Suspending += OnSuspending;

    // Do any initialisation work here - just don't break the app
    // or do anything that will slow start-up down too much.
    SomeValue = "Initial Value";
  }
}

To use in your code, e.g. on some page somewhere:

public sealed partial class SomePage : Page
{
  // Cast the running Application instance back to our App type
  // so its members (e.g. SomeValue) are accessible.
  App _AppReference = App.Current as App;

  void SomeMethod()
  {
    _AppReference.SomeValue = "New Value";
  }
}
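
One optional refinement – my addition, not part of the original snippet – is to wrap the cast in a static property on App so pages don’t have to repeat it (the name Instance is just what I’ve chosen for this sketch):

sealed partial class App : Application
{
  // Typed accessor: callers can write App.Instance.SomeValue
  // instead of casting App.Current on every page.
  public static App Instance => (App)Application.Current;
}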

 

 

Using the Clipboard in UWP

How to Copy to Clipboard (Text String)

using Windows.ApplicationModel.DataTransfer;

...

var dataPackage = new DataPackage();
dataPackage.RequestedOperation = DataPackageOperation.Copy;
dataPackage.SetText(text);
Clipboard.SetContent(dataPackage);
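
Pasting a text string back out isn’t covered in the original snippet, but for completeness it follows the same pattern – a minimal sketch (assumes you’re in an async method):

DataPackageView dataPackageView = Clipboard.GetContent();

if (dataPackageView.Contains(StandardDataFormats.Text))
{
  string text = await dataPackageView.GetTextAsync();
  // Use the pasted text, e.g. assign it to a TextBox.
}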

 

How to Paste from Clipboard (Image/Bitmap Data)

using Windows.ApplicationModel.DataTransfer;
using Windows.Graphics.Imaging;
using Windows.Storage.AccessCache;
using Windows.Storage.Streams;
using Windows.UI.Xaml.Media.Imaging;

...

// Note: dec, imgBytes, imgWidth and imgHeight are fields on the page in the
// original app; imgImage is a XAML Image control, and ByteArrayToPixelMap()
// is a helper method defined elsewhere.
DataPackageView dataPackageView = Clipboard.GetContent();

if (dataPackageView.Contains(StandardDataFormats.Bitmap))
{
  IRandomAccessStreamReference imageReceived = await dataPackageView.GetBitmapAsync();
  if (imageReceived != null)
  {
    using (var imageStream = await imageReceived.OpenReadAsync())
    {
      var bitmapImage = new BitmapImage();
      bitmapImage.SetSource(imageStream);
      imgImage.Source = bitmapImage;

      dec = await BitmapDecoder.CreateAsync(imageStream);

      // https://docs.microsoft.com/en-us/uwp/api/windows.graphics.imaging.bitmapdecoder
      // https://docs.microsoft.com/en-us/uwp/api/windows.graphics.imaging.bitmapdecoder.getpixeldataasync#Windows_Graphics_Imaging_BitmapDecoder_GetPixelDataAsync
      var data = await dec.GetPixelDataAsync();

      imgBytes = data.DetachPixelData();
      imgWidth = dec.OrientedPixelWidth;
      imgHeight = dec.OrientedPixelHeight;

      // FYI, Overall image L-R, Top-Bottom.
      // Groups in reverse order: BGRA BGRA BGRA

      ByteArrayToPixelMap(imgBytes);
    }
  }
}

Dynamically Populate a UWP Grid using C#

This technique works for both columns and rows; you just have to use the correct definition object, ColumnDefinition or RowDefinition.

Note the references to your specific grid instance (e.g. myGrid) and Grid methods (e.g. Grid.SetRow()).

myGrid.Children.Clear();
myGrid.ColumnDefinitions.Clear();

ColumnDefinition colDef;

for(int i = 0; i < myArray.Length; i++)
{
  colDef = new ColumnDefinition();
  myGrid.ColumnDefinitions.Add(colDef);
  colDef.MinWidth = 50;

  // Create a new instance of a TextBlock
  // or any other control; making a factory 
  // method(s) is usually wise.
  TextBlock txt = MyTextBlockFactoryMethod();
  myGrid.Children.Add(txt);
  Grid.SetRow(txt, 0);
  Grid.SetColumn(txt, i);
  txt.Text = myArray[i].ToString();
}
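
The factory method itself isn’t shown above; a minimal sketch of what MyTextBlockFactoryMethod might look like (the styling values are just placeholders):

using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

private TextBlock MyTextBlockFactoryMethod()
{
  // Centralises the styling so every generated cell looks consistent.
  return new TextBlock
  {
    FontSize = 14,
    Margin = new Thickness(4),
    TextAlignment = TextAlignment.Center,
    VerticalAlignment = VerticalAlignment.Center
  };
}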

Application Version Info Panel

Here’s how to build a simple panel that shows the user the version info of your UWP application.

First the XAML / UI:

<Border x:Name="pnlInstallLocation" Padding="10" CornerRadius="5" Background="#FFB9D2F6" Margin="0,5,15,0" RelativePanel.AlignLeftWithPanel="True" RelativePanel.AlignRightWithPanel="True">
  <RelativePanel>
    <TextBlock x:Name="lbllocation">Install Details</TextBlock>
    <TextBlock x:Name="txtBuildDetails" Text="[build details]" RelativePanel.Below="lbllocation" RelativePanel.AlignRightWithPanel="True" RelativePanel.AlignLeftWithPanel="True" FontSize="12" Padding="2" MinHeight="12" Margin="10,0,0,0" />
    <TextBox x:Name="txtVersion" Text="[version]" RelativePanel.Below="txtBuildDetails" RelativePanel.AlignRightWithPanel="True" RelativePanel.AlignLeftWithPanel="True" FontSize="12" Padding="2" IsReadOnly="True" BorderThickness="0" Background="Transparent" MinHeight="12" Margin="10,0,0,0" />
    <TextBox x:Name="txtLocation_App" Text="[location]" RelativePanel.Below="txtVersion" RelativePanel.AlignRightWithPanel="True" RelativePanel.AlignLeftWithPanel="True" FontSize="12" Padding="2" IsReadOnly="True" BorderThickness="0" Background="Transparent" Height="20" MinHeight="12" Margin="10,0,0,0" />
    <TextBox x:Name="txtLocation_DB" Text="[location]" RelativePanel.Below="txtLocation_App" RelativePanel.AlignRightWithPanel="True" RelativePanel.AlignLeftWithPanel="True" FontSize="12" Padding="2" IsReadOnly="True" BorderThickness="0" Background="Transparent" Height="20" MinHeight="12" Margin="10,0,0,0" />
  </RelativePanel>
</Border>

The code behind:

using Windows.ApplicationModel;
using Windows.Storage;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Media;

private void Page_Loaded(object sender, RoutedEventArgs e)
{
  Package package = Package.Current;
  PackageId packageId = package.Id;
  PackageVersion version = packageId.Version;

  txtBuildDetails.Text = string.Format("{3} ({4}){0}Package {2}{0}Installed {1}", 
    Environment.NewLine,
    package.InstalledDate, package.Id.Name,
    package.DisplayName, package.Id.Architecture);

  txtVersion.Text = string.Format("Version {0}.{1}.{2}.{3}", version.Major, version.Minor, version.Build, version.Revision);

  txtLocation_App.Text = string.Format("Application folder {0}", package.InstalledLocation.Path);
  txtLocation_DB.Text = string.Format("Database folder {0}", ApplicationData.Current.LocalFolder.Path);


  #if DEBUG
  txtVersion.Foreground = new SolidColorBrush(Windows.UI.Colors.DarkRed);
  #endif
}

 

This code has been lifted from a working app.  I haven’t done a detailed check of the code for this post, but everything needed should be present – it targets Windows 10 Anniversary Edition (10.0; Build 14393).

Using Symbols in UWP UI Controls

To use a symbol character from a font like Segoe MDL2 Assets or Segoe UI Symbol as the visible content / text of a UWP control in XAML, use the following code, substituting the correct symbol code (E15E and 1234 below are placeholders).

E.g.:

Content="&#xE15E;"

To set in C# use:

lblMySymbol.Content = "\u1234";
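
One thing to watch (my note, not from the original post): the glyph only renders as intended if the control is actually using the symbol font, which you can also set from code:

using Windows.UI.Xaml.Media;

lblMySymbol.FontFamily = new FontFamily("Segoe MDL2 Assets");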

Also see:

  1. https://docs.microsoft.com/en-us/windows/uwp/design/style/segoe-ui-symbol-font
  2. https://www.google.co.nz/search?q=segoe+mdl2+assets+cheatsheet
  3. http://www.kreativekorp.com/charset/font.php?font=Segoe+UI+Symbol

About These Code Posts

I no longer code to earn a living, I code for pleasure.  The code snippets posted here are mainly for my own reference – and to benefit anyone who can use them.

Agile – the Rationalism vs Empiricism Epiphany

23-Mar-2018. Just an FYI that I’m currently re-drafting this article with feedback from a couple of colleagues; you can expect some significant changes at some point – all of which should make the article clearer.

I’d like to share with you an epiphany.  I run the risk of it being like a riotously funny party joke, retold later to much less effect: a case of ‘I guess you had to be there’, but that’s seldom stopped me before.

This epiphany is essentially a new way of viewing several things:

  • Waterfall vs agile, and agile vs Agile (big A, little A).
  • Why people often don’t ‘get’ software development; and probably why your developers and project managers have a fractious relationship, especially whenever estimation and deadlines are hot topics.

In short, the epiphany is based in philosophy.  First comes the recognition that waterfall is inherently rational, whereas agile is inherently empirical.  You then realise that rationalism and empiricism, as theories within philosophy, have roots back in antiquity (Plato and so forth) and have essentially been elaborated and compared for significantly longer than waterfall and agile have been around.  Therefore, we can contemplate agile and waterfall from a purely philosophical viewpoint (without any pretense of ‘brand’ or additional baggage), and we can also draw on a significant body of previous philosophical work.

I also suggest to you that there exists a rationalist-empiricist continuum on which each of us is placed – some people are naturally inclined towards one end or the other.

Let’s look at this further and see what we can see through this lens.

Short Philosophical Primer

Rationalism and Empiricism are both theories within Epistemology – the branch of philosophy concerned with the theory of knowledge:

  • Rationalism is the view that “regards reason as the chief source and test of knowledge”, or in other words, where “the criterion of the truth is not sensory but intellectual and deductive”.
  • Empiricism is a theory that states that knowledge comes only or primarily from sensory experience.

Philosophical literature frequently discusses one in contrast to the other.  One often also sees references to things being ‘a priori’ or ‘a posteriori’.  This isn’t something you need to remember, but it is interesting to note the concept behind these two terms as it’s relevant to our subject and subsequent contemplation:

  • A Priori is “what comes before”, as opposed to
  • a Posteriori which is “what comes after”.

The question (of whether or not something comes before or after) is significant enough that a priori and a posteriori appear in ‘Elements’ – Euclid’s 300 BCE work which served as the model for precise thinking throughout 15th-18th century Europe.

For those of you less well disposed to Latin, you might also come across the terms deductive and inductive:

  • A deductive argument is where the conclusion follows from the premise, i.e. via reason.  I think, therefore I am.
  • In contrast, an inductive argument is based on experience/observation; in such cases the conclusion may be strongly supported by the premises but is not necessitated by them.

It’s also interesting to note that empiricism is a key part of the scientific method.

So, key takeaway: Rationalism and Empiricism have been around and thought about for ages, and there’s lots we can draw from that.

Is Waterfall Really Rational and Agile Empirical?

If you already ‘get it’, then you can probably skip this section.

Waterfall, as it is practiced in an Information Technology related project delivery context, is a sequential approach.  Typically it consists of phases such as analyse, design, build, test, and so on.  Each phase informs the next: analysis informs us what the problem is, based on this we design a solution, based on this design we implement the solution, and so on.  Only at the end, when the solution is complete, do we see it working and finally see how well it addresses the original problem.

In other words, the basis for the solution’s design and implementation is rational thought – what we can deduce.  It’s essentially theoretical.

The planning of waterfall projects is also based on rationalism; at the outset, estimates are made as to how long each phase will take, how much it will cost, and who needs to be involved.  All of this happens in advance, and is frequently updated.  Should reality hit the project, the team is forced to ‘re-baseline’ – delivery schedules, resource estimates and costs are all recalculated, based mainly on a rational thought process.

Agile, again as practiced in ‘IT’, is based on an empirical approach where work is done and the results evaluated before proceeding.  At the beginning of an agile project effort is spent working out an initial prioritised list of problems to be solved, a small chunk of this is then worked on by the team, the results are evaluated and corrections made – corrections in terms of the base assumptions and the work/output itself.  In this way the project proceeds, repeating a consistent cycle of delivery and inspection.

The planning of agile projects is also based on empiricism; teams typically provide relative estimates based on the size of previous work that they have already done, and in this way the estimates are evidence based.

Agile Explained

As I wrote previously, Marty Cagan has a nice simple way of asking people if they are agile:

  1. How soon can you test?
  2. Does shipping out a release mean you’re finished?

At face value this appears fairly straightforward, but I think we can go further:

  • Are you driven by what you have planned or what you can prove through experience?

If you primarily “know” a thing because you have deduced it then you’re not being empirical, and therefore you’re not agile.  It’s that simple.

Agile: A vs a; Big A vs Little A

It’s true that in the context of contemporary software development the concept of Agile (as in the big A, as in the Agile Manifesto) is bundled up with additional ideas: individuals and interactions, collaboration, and so on; and whilst these additions clearly add immense value and have made agile more accessible for many, they don’t alter what’s arguably the single most important foundational idea behind agile – empiricism.

Take the agile manifesto and its guiding principles, for example:

  • Individuals and interactions over processes and tools.
  • Working software over comprehensive documentation.
  • Customer collaboration over contract negotiation.
  • Business people and developers must work
    together daily throughout the project.
  • Build projects around motivated individuals.
    Give them the environment and support they need,
    and trust them to get the job done.

Many of these ideas (such as those listed above) are not mutually exclusive with waterfall; any group of people using waterfall can still choose to collaborate, be motivated and trust each other.  It’s true (in my experience at least) that we don’t often associate the above with waterfall projects – but that’s inductive reasoning: just because it’s often the case does not mean it is bound to always be the case.

Sure, waterfall is based on rationalisation, which is often accompanied by process, requiring tools – but I’d argue that the use of those processes and tools is often driven by other factors: social norms, people hiding behind process, and so on; it is still possible to put ‘individuals and interactions over processes and tools’ and follow a rational process.

There will no doubt be many for whom these additions are in fact indivisible parts of Agile, and I think that view is fine.  My point is that one of the ways to better understand agile/Agile is to mentally remove all of these additions and conceptualise agile/Agile (and waterfall/agile) from a philosophical perspective based on Empiricism and Rationalism.

We can also apply this lens to a more recent phenomenon, the mainstream adoption of Agile as a product to make money off.  Don’t confuse agile (and the key part: learning from experience and evidence) with Agile (i.e. specific recipes, brands and elaborate constructs).

State of Mind

Thinking about social norms and human behaviour brings us to my second key point: the existence of a rationalist-empiricist continuum, and the premise that everyone is somewhere on this continuum – usually closer to one end than the other.

In my view, many software developers that I have worked with are empiricists – in that they have a greater affinity to an empirical way of thinking than a rational one.  Conversely, many project managers I have worked with are rationalists – they have a greater affinity to a rational way of thinking.

Are these philosophies, these states of mind, mutually exclusive?  I’d say not.  For a start this article is itself based on ideas which have emerged through a combination of both.  Furthermore, I can think of many occasions when I have personally used each.

Both are clearly useful tools, and like all tools they are most useful when the correct tool is matched to the problem being faced (not the other way around).  The problem that rationalists are frequently confronted with in a software project is that the nature of software development is inherently empirical.  Or to be a little more precise, the nature of software development when applied to solving a particular real-world problem, is usually inherently empirical.

I say ‘usually’ because I cannot say for certain that all such problems are categorically empirical.  If we stretch software out to encompass IT, and therefore hardware, rationalism becomes more relevant.

We must acknowledge that for a lot of people, especially the senior executive who’s paying the bill, the basic question of “when will it be ready and how much will it cost” is not unreasonable.  The challenge is how to best answer it – in this regard both Empiricism and Rationalism have their unique challenges.

 

 

Customer Inspired; Technology Enabled – Product Tank Wellington MeetUp with Marty Cagan

The Product Tank Wellington meetup ran a really cool session recently called “Customer Inspired; Technology Enabled” with internationally recognised product guru Marty Cagan.

As you can imagine, someone of Marty’s calibre provides a lot of great wisdom.  Some reinforced or reinvigorated stuff I think I already knew, but much was also new.

Here are my old-school hand-scribbled notes (2 pages) if you’re interested (or neglected to take your own, tsk tsk): Customer Inspired, Technology Enabled with Marty Cagan – 12-Feb-2018 – Adrians notes

Note to anyone doing architecture: broadly speaking, anywhere it says “product” I think we can swap with “solution”.  Which is why I’ve tagged this #ArchitectureInTransformation – architects need to (at least) be mindful of this stuff.

Also, in this context when we talk about “product” we mean a technical product of some kind (i.e. software/technology related) – not something like floor polish or mint scented vacuum bags.

Key Takeaways and Gems

Asking customers what they want

If you’re looking for where to take your product, the short answer is “don’t”.  Instead, invest your time in asking customers about their problems.

You should not rely on customers to tell you about which direction to take your product, or what new features or capabilities to add, because:

  1. They don’t know what’s possible – they generally aren’t technologists. (The clever technologists should be the people on your team).
  2. Great (new) ideas have to be discovered.  For me personally, Marty was making a strong connection to empiricism – in that you can’t rationalize your way to a “new” idea.

The way to flip the question is to ask your customers about things that they do know about: their problems and their constraints.

Another reason why you can’t reliably ask customers what they want is because they themselves don’t actually know what they want until after they’ve seen it.

Engineers

Marty spoke repeatedly and at length about the importance of involving engineers in the product process.  He cited several cases where new successful products had emerged from the techies – essentially from random ideas they had on the fringes of a project, where their inventiveness (based on their deep understanding of the technology) led to something entirely new.

He suggested giving developers time for discovery – something in the ballpark of half an hour a day.

Overall his message was clear:

  1. Work with strong engineers that are passionate about your vision.
  2. Do not shelter them – expose them to the full business context; expose them to customers.
  3. Provide them with constraints, not requirements.

Requirements First?

Speaking of requirements (whilst talking about agile) he neatly flipped the old Analyse > Design > Build model around:

  1. Knowledge of the technology…
  2. > enables design…
  3. > drives desires/needs/requirements

Essentially this comes back to the same point posed by “asking customers what they want” – if customers don’t know what is possible then the requirements will always fail to get the most out of what the technology is capable of.

Are you Agile?  Really.

I had to laugh – Marty’s position on Agile was that it’s a no brainer, like why are people even asking this question.  And it wasn’t just the words that gave me a wry grin, it was also his tone: dry, cuttingly sardonic, with a hint of tactful incredulity and thinly veiled loathing.

Point is, there’s a difference between thinking you’re agile and being agile.  Try these two refreshingly straightforward questions:

  1. How soon can you test?
  2. Does shipping out a release mean you’re finished?

The correct answer to #1 is that if testing is done at the end, it’s too late; if you’re agile you’re testing as early as possible and not just at the end.  If you only test at the end, then that’s where you are putting all the risk.

#2 is a really key one; it’s about the difference between releasing something and solving a problem.  The common misconception is that when you’ve put out a release, you’re done; but whilst getting stuff delivered is great, you’re only actually “finished” if you’ve solved the problem you set out to solve.  Shipping out a release merely gives you an opportunity to see if you’ve really solved it.

So, if you iterate – great; iterate, test and keep shipping until your target problem is solved.

Roadmaps

Much of Marty’s talk sounded like heresy… in that it would certainly sound blasphemous to many people I can think of.  His discussion on product roadmaps was no exception.

Roadmaps tend to assume that 100% of the ideas on them are good ideas.

The reality is somewhat different.  Marty cited Google: in their experience, for every 10 ideas they have (on a roadmap) only 1 tends to pan out.

Bad use of roadmaps relates back to the second point in “are you agile?” – people sometimes confuse delivery with completion.  People walk around with roadmaps and release schedules, focusing on getting stuff delivered.

So if that’s all wrong, what does right look like?

Essentially it comes back to having a strong product vision.

My notes on this part of the talk are scarce – a sign that I was either too deeply engrossed to write, or I agreed with what he said and felt no need to note the obvious.

In either case, the key takeaway for roadmaps takes heavily from the points above: focusing on the product vision – which I think we can safely extrapolate to:

  1. Understanding the customer and their problem.
  2. Giving your teams constraints and time to come up with the unexpected.
  3. Iterating until solved, not just shipped.

Roadmaps and Agile

From a philosophical perspective, roadmaps are rational – they plan out what is to happen; whereas agile is empirical – it learns from what has happened.

Roadmaps attempt to answer the fundamental questions: how much will it cost, and when will we get it?  And as Marty acknowledges – that’s not an unreasonable thing to want to know.

Agile can answer these questions but only once you’ve done enough work, to provide enough meaningful experience, on which to base a forecast.  Marty elaborated on that theme in terms of a “high-integrity commitment”.  I don’t have any notes on that so allow me to refer you to Marty’s blog:

Teams

  • Measure teams as a whole; not in terms of “functional” teams, but product (solution) teams.  (What does this mean?  Think about the difference between “shipping” and “solving” and you’re pretty much there).
  • Provide teams with a competent and confident product manager.

Product Managers

The final subject I want to cover is around product managers – specifically good ones; it’s important to me because it’s highly relevant to what I see as an architect in the solution-architect / domain-architect / enterprise-architect / consultant space.

Marty placed a lot of emphasis on the importance of having a good product manager.  For him the product manager is like the “CEO of the product”, where CEO refers to the calibre of the person in that role, and because good CEOs know all the elements of their business.

Product managers need to be smart, creative and persistent.

The product manager must have a deep understanding of:

  1. The customer(s).
  2. Industry trends.
  3. How your business works.

The reason you want a good product manager is because this is the type of invaluable knowledge and wisdom they’ll bring to your team, and to your product.