The 7 Deadly Sins of Developer Experience with Cristiano Betta #APIDAYSAU

One of the sessions I enjoyed the most at API Days 2018 was Cristiano Betta’s talk on Developer Experience (DX), i.e. how to more effectively engage with developers who are consuming your APIs.  The learnings go beyond developer onboarding specifically, and are applicable to product development in general – which is partially why it was so cool.

Slides here: https://betta.io/blog/2017/11/10/the-seven-sins-of-developer-experience

I also caught up briefly with Cristiano afterwards, where he expanded on a couple of points, as the talk he gave was a slightly shorter version of a longer talk.

An overarching theme was reducing cognitive load through the use of fundamental design principles.  The deadly sins he covered were mainly around information:

  1. Too much
  2. Too soon
  3. Too little
  4. Unstructured
  5. Unsupported
  6. Incomplete

…with “no control” over tools as #7.

There were a variety of points of interest that I noted down, which I’ll briefly cover, but the things that really grabbed my interest were:

  • “Too little / too late”, which is effectively about taking a holistic approach.
  • The idea of measuring and responding to developer friction.

Note – the focus of Cristiano’s talk is the developer experience of onboarding rather than API design itself – for more on developer experience in terms of API design, you might want to check out something like APIs You Won’t Hate.

Too Little, Too Late

This is partially about documentation – but not in the sense of manuals, it’s more about providing enough information when it is needed.  Case in point: resolving errors.

The example Cristiano gave was when a developer is making a call to your API (probably for the first time) and they encounter an error – e.g. related to input.  Let us say they call your API and it provides this response:

{
  "error": "000123 - Invalid input"
}

What you want to avoid is the situation where the developer needs to resort to internet searching.  Sure, you might have it covered in your help documentation:

Developer Guide – Error Codes – Page 421

Error 000123 – Invalid input.  Occurs when you use a boolean on a Friday, on Friday you must use an int: 0 = false and 1 = true.

Your problem is that developers will already have formed habitual techniques for dealing with issues like this, probably using online resources – resources they know well, and which through habit present a relatively low cognitive load.

There are many reasons why this is bad: you have no control over the experience, how long it will take, or how frustrated they will get – not to mention the “OMG, I can’t believe they don’t just say that” and “why is this so unnecessarily hard” comments all over StackOverflow.com.

What’s the Solution?

A better way to do it is to include useful information in the error response itself:

{
  "error": "000123 - Invalid input. Occurs when you use a boolean on a Friday, on Friday you must use an int: 0 = false and 1 = true."
}
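
Taking that a step further, a more structured payload might separate the code, the message and a pointer to the docs.  Here’s a minimal sketch (written as a Python dict for readability) – the field names and doc URL are my own illustration, not something from Cristiano’s talk:

# Illustrative only – field names and the URL are hypothetical.
# The idea: carry the code, a human-readable explanation, and a
# pointer to the docs in the response itself, so the developer
# never has to leave their terminal.
error_response = {
    "error": {
        "code": "000123",
        "message": "Invalid input",
        "detail": "Occurs when you use a boolean on a Friday; "
                  "on Friday you must use an int: 0 = false, 1 = true.",
        "docs": "https://example.com/docs/errors#000123"
    }
}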

Yes, you can also have this information in your developer guide.  The trick is including the relevant information when it’s needed; not too much, not too little, and just at the right time.  This leads on nicely to another cool concept…

Developer Friction

Adrian Trenaman’s QCon NY 2017 presentation on Developer Experience included the idea of minimising “the distance between ‘hello, world’ and production”.  In that context he was discussing development in a holistic sense (tooling, environment, and so on) where you are employing developers, but as Cristiano explained to me, you can also look at “developer friction” in the context of developer adoption of your APIs.

In this context, developer friction is effectively the amount of time between (a) making an API call that errors and (b) the first successful call to the same API – or some meaningful variation along those lines, such as the time between developer registration and their first successful API call.

So, imagine that you have 10 developers a day signing up to your API and making their first ‘hello world’ call.  Let’s say 50% of them get an error the very first time they make an API call, and on average 90% of those developers are able to make a successful call within 2 minutes.  Now compare that to a situation where 80% get an error the first time, and of those, 90% take on average 2 hours to make a successful call.  Clearly the second situation has much higher developer friction.
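
If you wanted to track this yourself, a minimal sketch might look like the following – the event shape and field names are my own assumption, not anything Cristiano specified:

from datetime import timedelta

def friction_stats(developers):
    # `developers` is a list of dicts with datetimes for `first_error`
    # (None if their first call succeeded) and `first_success`.
    errored = [d for d in developers if d["first_error"] is not None]
    error_rate = len(errored) / len(developers)
    # Time from first error to first successful call, per developer.
    recoveries = [d["first_success"] - d["first_error"]
                  for d in errored if d["first_success"] is not None]
    avg_recovery = sum(recoveries, timedelta()) / len(recoveries)
    return error_rate, avg_recovery

Tracking those two numbers over time gives you exactly the comparison in the example above: what share of first calls fail, and how long recovery takes.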

According to Cristiano, some organisations use techniques like this to monitor adoption of their APIs and specifically to help them identify areas where their overall developer experience may need improvement.

Other Gems

I won’t go into these concepts in much detail, as hopefully you are at least aware of them already – if not, I kindly (and strongly) suggest you check them out.  Cristiano’s slide-deck is a great place to start.  It covers a lot more than what I have included here.

Cognitive Load, Overload and Progressive Disclosure

Cognitive load refers to the effort being used in the working memory; cognitive overload is where (for example) a learner is unable to simultaneously process a certain amount of information or tasks.  Solutions to this include:

  • Chunking information up, e.g. into lists of about 8 items, with a useful heading.
  • Applying the 80/20 rule, e.g. call out the small number (~20%) of items that developers are most likely going to be seeking, especially if they are new to your platform, and leave the other 80% accessible but through other navigational means.

That second point is an example of Progressive Disclosure, a great technique for managing cognitive load, covered in detail in the book “Universal Principles of Design”.

Another really interesting pitfall around cognitive load was around asking people questions, like on sign-up forms:

[Slide: a sign-up form question]

As Cristiano explained, this may look simple but it raises a lot of questions in people’s heads – questions which might not seem a big deal to you but can be problematic for others (especially if the field is mandatory):

  1. Who will see this?
  2. Can I change it later?
  3. What do you need it for?

These 3 simple questions really resonated with me, and they provide a simple checklist to consider when reviewing the questions you ask your customers.  I know from firsthand experience that questions like this, in some circumstances, force me to stop and think way more than should be necessary.

Tools Out of Control

This is where community tools and SDKs are more prominent than your own.  Unfortunately Cristiano didn’t have time to go into this in a lot of depth in terms of solutions, but clearly SDKs and other tools are an integral part of your offering, and a critical part of DX; therefore it’s critical to have a plan in place for managing these as part of your product.

This is most likely going to include monitoring the community – where they are; understanding what tools they want; staying engaged.

Using Structure

Another nice 3 point list was around structure – i.e. allowing people to navigate through the information you provide them:

  1. Where am I?
  2. Where can I go?
  3. Where did I come from?

Telling a Story

Whilst having information in inherently useful structures is good, you can augment this in key situations (such as developer onboarding) with storytelling – another technique covered in the Universal Principles of Design.

Cristiano cited Pusher as an example of doing this well – the “hello world” make-your-first-app story.  Here are the screenshots; as you can see, the path from account creation to “hello world” has been streamlined, and users can easily opt out of it if they want.

[Screenshots: Pusher’s onboarding flow, from account creation to “hello world”]


#ApiDaysAU


Real World Machine Learning with Susie Sheldrick #APIDAYSAU

I’m at the API Days conference, and one of the first sessions of note was Deep Learning: Real World Applications with Susie Sheldrick, which explored some of the practical real-world challenges related to machine learning, based on experience.  I also caught up with her after the session, where we expanded on some of the curlier questions.

Quick Context: 30 Second Intro to Machine Learning

Susie kicked off with a simple diagram that sums up what machine learning is in comparison to traditional applications:

[Diagram: traditional applications vs machine learning]

Machine learning partially turns this model on its head: the solution is able to “learn” its own rules (through training its internal rules model) at much greater scale than some person/team coding them by hand.  So, rather than feeding data and manually created rules into a solution, simply train the solution to produce its own rules.
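
As a toy illustration of that contrast (my own sketch, not from Susie’s talk – the loan-style data is invented, and it assumes scikit-learn is available):

# Traditional approach: a person writes the rule by hand.
def approve_by_hand(income, debt):
    return income > 50_000 and debt < 10_000

# ML approach: the model derives its own rule from labelled examples.
from sklearn.tree import DecisionTreeClassifier

X = [[60_000, 5_000], [20_000, 15_000], [80_000, 2_000], [30_000, 12_000]]
y = [1, 0, 1, 0]  # past decisions – the training data
model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[55_000, 4_000]]))  # the learned rule, applied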

The Chaser

This nice intro kicked off a mental train of thought for me: in practice the more complete solution probably looks something like this:

[Diagram: my take on the more complete solution]

The end goal is still to build a solution that provides the answers users were seeking; we’re simply using machine learning to help out with the rules.

Devil in the Detail

That all sounds wonderful on paper – or in ivory-tower pixels – but, as should be no surprise, the real world is not so straightforward.

Of critical importance:

  • Understanding the problem you’re trying to solve.
  • Gathering the right data to train the model.

This is much easier said than done; it transpires that:

  • It’s all too easy to inadvertently train bias into the rules model.
  • Tracing exactly how the AI made a specific decision actually turns out to be really hard.

Whilst the second point has obvious implications for developers and testers, both points combined have massive implications for your legal teams, anyone who considers themselves ethical (like you, right?), product owners, and anyone at the receiving end of a machine-determined decision.

Bias

Susie gave some examples of unexpected and undesirable bias ending up in rule models, such as one experiment that determined prisoners’ eligibility for parole.  It turns out that the model significantly favoured granting parole to white prisoners and was much less favourable to prisoners of colour.  In contrast, in terms of parolees reoffending, the actual results were the exact opposite of the bias.

It turns out that the information used to train the model was “correct” but only in the sense that it faithfully transposed the bias already inherent in the legal system, against people of colour.
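
One simple (and admittedly simplistic) first check is to compare outcome rates per group.  The data shape below is invented for illustration, and real fairness auditing needs far more care than this:

from collections import defaultdict

def outcome_rates(records):
    # records: (group, outcome) pairs, e.g. ("group_a", 1) for parole granted
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

rates = outcome_rates([("a", 1), ("a", 1), ("b", 0), ("b", 1)])
print(rates)  # a large gap between groups is a flag worth investigating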

True Representation

A related issue isn’t so much bias in the data, but bias stemming from an absence of data.  Once more issues of race come to the fore; this time it was a passport application solution that told an Asian gentleman his submitted photo “did not meet our standards” because he was “asleep”.  As you might be able to guess, the model had obviously not been sufficiently trained with data that faithfully represented the entire user base, and therefore could not correctly handle non-European facial features.

Just to be crystal clear, the technology is more than capable of correctly handling a wide range of cases, nuances and subtlety – including racially based facial features.  The actual issue is the correct training of the model – meaning it’s critical to gather the right data, data that covers the entire spectrum of cases.  Not to mention testing and monitoring the behaviour of the solution.
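
A crude sketch of checking that coverage – the thresholds and data shapes here are mine, purely illustrative:

def underrepresented(train_counts, population_share, tolerance=0.5):
    # Flag groups whose share of the training data falls well below
    # their share of the population the solution is meant to serve.
    total = sum(train_counts.values())
    return [g for g, share in population_share.items()
            if train_counts.get(g, 0) / total < share * tolerance]

print(underrepresented({"european": 900, "asian": 50},
                       {"european": 0.6, "asian": 0.4}))  # ['asian']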

Building an AI Solution: Custom or OOTB?

If you’re about to embark on a project that involves machine learning, one of the practical questions you’ll come up against is whether you can use an Out-Of-The-Box (OOTB) solution, or need to custom build something.  Susie’s discussion here was mostly in reference to the rule models specifically.  If you want a model capable of identifying cats in pictures online for your meme generator – you’re in luck; but if you need to correctly identify something more obscure, or more specific, you may have to build this model yourself.  Which is why the stuff above about bias is so important, because you’re going to have to navigate that minefield yourself.

Further Questions

Our chat after the session was very stimulating; a couple of the more curly questions that our conversation provoked were:

How to identify, and test for, unexpected bias?

The obvious ethical reaction to all of this is “great, let’s ensure we keep unwanted bias out of the model and our solution”.  What is much less obvious is how to do that.

Were the team behind the parole example conscious of the bias in that solution?  Let us assume they weren’t aware of it – in such a situation, how would they (or you) identify that bias?  And having established an operational solution, how would you ensure none was introduced later?

This is where, for me, machine learning is like a lens that amplifies human behaviours and bias.  It has the potential to help expose them, but how clearly, how soon, and at what cost?

How will your model react in the event of change over time?  I.e. if there is a fundamental shift in the (data) foundations on which the model was originally conceived and trained?

For example, Google is looking at moving back into the Chinese market, despite pulling out some years ago due to human rights concerns.  Hypothetical example: let’s assume that they have machine learning models built up, based on the data they currently have access to – i.e. data that does not include China’s current population of 1.3 billion.

What would happen if 1.3 billion Chinese people suddenly have access to a Google solution that is backed by a rules model that was not trained with them in mind?  Sure, Google’s data should be a fair representation of their current global user base, which will include Chinese – but wouldn’t adding 1.3 billion people potentially shift the model?  How will it react?  Will the responses it provides be biased against the new user population because hitherto they were not expected by the model?  Will the model be able to adapt over time, and if so how long will that be?
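
One common way to ask “has the data shifted under my model?” is a two-sample test between training-time and live feature values.  This is my suggestion of a standard technique, not something from the talk, and it assumes scipy is available:

from scipy.stats import ks_2samp

def has_drifted(train_values, live_values, alpha=0.01):
    # Kolmogorov–Smirnov test: a small p-value suggests the live
    # distribution no longer matches what the model was trained on.
    stat, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

Run periodically per feature, this at least tells you when the foundations have moved – it doesn’t tell you how the model will react, which is the harder question.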


Please note that this post is based on rapidly scrawled notes in session and my recollection of subsequent discussions – my accuracy should be reasonable but may not be perfect.


Fireside Chat with Zheng Li, VP of Product @ Raygun – Product Tank Wellington MeetUp

The Product Tank Wellington meetup ran a “fireside chat” last night with Zheng Li, who is currently VP of Product with Raygun – a Wellington-based company, currently on loan to the U.S.

The conversation covered her career path to Product via UX, advertising, championing women in tech and a passion for business, as well as delving into specific topics around being a product person.

Here are the key takeaways I jotted down, which I’ve tried to organise by topic…

Career Path

Zheng gave us a neat little story about how she started out (in a sense): a classic tale of taking something that nobody else wanted to do and absolutely nailing it.

The task was designing banner ads for TradeMe.  She obviously attacked her self-imposed challenge with passion and drive (significant keys to success on their own), but I also noted that:

  • She formed a loose multidisciplinary team which (I think) included marketing folks and people with knowledge of, and access to, data analytics.
  • Was data driven – each time she/they ran a new design, they would analyse the data to see what was working and what wasn’t, and think about why that was the case.

The other factor which she used to her advantage was being able to iterate at an appropriate speed – which was obviously supported by the data she and her team had access to.

Some pretty obvious takeaways there; a key one for me would be about being data driven / enabled – the implication being: you need to have the data.  As a data architect colleague of mine once said: before doing any data design, you must first think about what questions you will want to ask your data.

Other stand-out points around career path included:

  1. Turning weaknesses into strengths, by using them as differentiators.  The context for this was around credibility.
  2. Follow your passion.  Zheng laughed in response to a question – someone asked something which implied she had planned her career out; she said that in retrospect her career may look like it was planned, but the reality at the time was anything but.  Her response to challenges was to consciously seek out ways of addressing them – which in her case frequently included training courses, which collectively she found effective (I think for one particular area she did 7 different courses).
  3. People want to work with people they like and trust.  Zheng spoke of this in reference to relationships between companies, but it’s obvious from her perspective that this is based on interpersonal rapport.  It’s not hard to see this concept also applying at a personal career level – something I can attest to having also experienced it first-hand.

Another key career theme Zheng had was based on “that Venn diagram” – meaning the three overlapping lenses in Design Thinking which cover business/viability, technology/feasibility and people/desirability.  The specific terms she used might have been a little different, but for me the connection was pretty clear.

Her basic advice was to become proficient and confident in any two of these lenses; although that seemed to be somewhat tempered with her other guiding principle of being customer focused – which suggests the business/viability and people/desirability lenses.

“Product” Means Being Close to Customers

This was one of Zheng’s key themes.  Part of this was getting out and talking to customers, which is critical.

It was interesting to hear of her experiences using product “management” (my term, not hers – can’t recall exactly what she called it) as a selling tool.  The basis for this was:

  1. Selling the value of the product, not the product.
  2. Establishing a 1-on-1 rapport with people, and understanding what kept them up at night.
  3. Taking the time to really understand that problem from different angles.

As far as point #3 goes, that meant engaging with different people in the organisation to understand the problem from their perspective: technical, marketing, sales, etc; this obviously links back to the three lenses of design thinking mentioned above, and being close to customers – all good sensible product management stuff.

We can also expand this theme out from “customers” to “people”.  In her experience, product management is more about being people-based than technology-based (this was mentioned in reference to a technical product for developers).

There was also a leadership angle: for her leadership was about aligning the purpose of her staff to the purpose of her business.  The implication here is to talk with the people on your team and really understand what drives them and where they want to go with their career.

A Quick Note on Persuasion

If you want to persuade someone (such as your product manager – if you’re a tech working on the product, and you have a pet feature you want to add), you need to do two things:

  1. Speak in the language of the audience.
  2. Back it up with data.  This could be qualitative such as customer feedback, or quantitative data showing conversion rates.

Producty Bits

Dealing with Product Debt

Something I really liked was how she addressed debt – debt in the sense of technical debt, and even marketing debt, and so on: things which worked but could work better and had gotten to the point that they were affecting the bigger picture.  She referred to it (I think) as the “99 issues” or “99 problems” story.

  1. They got all the issues and logged them into Jira – meaning that they got it all out into the open.  Not just development/technical debt, everything.
  2. Presumably some sort of sizing and prioritisation work took place.
  3. They then knocked off a number of the items, reducing the overall debt.

The way she spoke seemed to indicate this was a recurring event – though not one that happened every year.  Bit of a spring-clean, I guess.  Zheng didn’t call it out specifically, but based on her other comments I presume space in the team’s capacity / product roadmap was allocated to this work.

Another interesting idea which occurred to me as she described this was the technique that Agile / Scrum teams sometimes use, whereby they adopt a sprint goal – something non-deliverable – that they want to improve during the course of the sprint/iteration/timebox.  Zheng didn’t explicitly say that was what they were doing but the idea seems relevant.  Zheng, if you ever read this I’d be interested to know if that concept was one you consciously used or were aware of.

Roadmap

Items on a roadmap (i.e. the implied promise / expectations set) should be based on two things:

  1. The team’s capacity to deliver them.
  2. Evidence that a given feature is wanted by customers.

Pushing Back

Don’t be afraid to push back.  If a customer requests a feature (for example) that is outside your roadmap and/or ability to deliver, then be wary of following the money.

This definitely fits with my experience; I tend to think that at an inter-business level or interpersonal level, the relationship needs to be built on mutual trust and respect – if the other party does not reciprocate then they’re probably not someone you want to be dealing with.

Zheng gave two examples:

  1. A major multinational effectively tried to bully their 50 wanted features on top of Zheng’s existing product roadmap – “you want our business or not?”  To have done so would have caused massive chaos within the company, affecting product delivery and so on.  Zheng counter-proposed a different approach which she and her teams could sustain.  The multinational rejected the offer and went elsewhere – only to return months later, accepting Zheng’s proposals.
  2. Another major company approached Zheng with features (she didn’t give specifics, but I think we can guess their approach was more reasonable and more adaptable).  Zheng recognised that some of these features would be great differentiators for their product, so (presumably) some changes were made to the product roadmap and the features added – in essence Zheng followed the money, but did so because there was more advantage than just the money.

Final Thought: The Iron Triangle

At one point Zheng told an anecdote about a developer talking with her about code quality.  I forget the story, but it reminded me of the old “Iron Triangle” or project management triangle – the one that is made up of scope, quality and cost (or some similar combination; cost and time obviously being closely related).  The model effectively states that you can control any two; the implication being that if you nail people down in terms of scope and cost (or time), you have no control over quality.

I asked Zheng if she was familiar with that model and how she approached it.  Her answer wasn’t as clear-cut and direct as I would have hoped (which is not a criticism – having presented publicly I know how hard it is to provide an off-the-cuff answer that is cohesive and concise), but seemed to boil down to this:

  1. Her first substantive reaction was to discuss scope and features, so I would guess that this is her first priority.  This would align with her other comments that put great importance on being close to the customer and understanding their needs.
  2. Her second substantive reaction was to discuss product roadmaps, specifically in reference to their timing and how they are used as the basis for cross-team coordination (marketing and so on), so I imagine time would be her second priority.

By default this would leave quality to manage itself; but we shouldn’t forget the “spring clean” approach, whereby random items of debt (arguably involving quality) can be addressed in a structured way.


Customer Inspired; Technology Enabled – Product Tank Wellington MeetUp with Marty Cagan

The Product Tank Wellington meetup ran a really cool session recently called “Customer Inspired; Technology Enabled” with internationally recognised product guru Marty Cagan.

As you can imagine, someone of Marty’s calibre provides a lot of great wisdom.  Some reinforced or reinvigorated stuff I think I already knew, but much was also new.

Here are my old-school hand-scribbled notes (2 pages) if you’re interested (or neglected to take your own, tsk tsk): Customer Inspired, Technology Enabled with Marty Cagan – 12-Feb-2018 – Adrians notes

Note to anyone doing architecture: broadly speaking, anywhere it says “product” I think we can swap with “solution”.  Which is why I’ve tagged this #ArchitectureInTransformation – architects need to (at least) be mindful of this stuff.

Also, in this context when we talk about “product” we mean a technical product of some kind (i.e. software/technology related) – not something like floor polish or mint-scented vacuum bags.

Key Takeaways and Gems

Asking customers what they want

If you’re looking for where to take your product, the short answer is “don’t”.  Instead, invest your time in asking customers about their problems.

You should not rely on customers to tell you about which direction to take your product, or what new features or capabilities to add, because:

  1. They don’t know what’s possible – they generally aren’t technologists. (The clever technologists should be the people on your team).
  2. Great (new) ideas have to be discovered.  For me personally, Marty was making a strong connection to empiricism – in that you can’t rationalize your way to a “new” idea.

The way to flip the question is to ask your customers about things that they do know about: their problem, their constraints.

Another reason why you can’t reliably ask customers what they want is because they themselves don’t actually know what they want until after they’ve seen it.

Engineers

Marty spoke repeatedly and at length about the importance of involving engineers in the product process.  He cited several cases where new successful products had emerged from the techies – essentially from random ideas they had on the fringes of a project, where their inventiveness (based on their deep understanding of the technology) led to something entirely new.

He suggested giving developers time for discovery – something in the ballpark of half an hour a day.

Overall his message was clear:

  1. Work with strong engineers that are passionate about your vision.
  2. Do not shelter them – expose them to the full business context; expose them to customers.
  3. Provide them with constraints, not requirements.

Requirements First?

Speaking of requirements (whilst talking about agile) he neatly flipped the old Analyse > Design > Build model around:

  1. Knowledge of the technology…
  2. > enables design…
  3. > drives desires/needs/requirements

Essentially this comes back to the same point posed by “asking customers what they want” – if customers don’t know what is possible then the requirements will always fail to get the most out of what the technology is capable of.

Are you Agile?  Really.

I had to laugh – Marty’s position on Agile was that it’s a no brainer, like why are people even asking this question.  And it wasn’t just the words that gave me a wry grin, it was also his tone: dry, cuttingly sardonic, with a hint of tactful incredulity and thinly veiled loathing.

Point is, there’s a difference between thinking you’re agile and being agile.  Try these two refreshingly straightforward questions:

  1. How soon can you test?
  2. Does shipping out a release mean you’re finished?

The correct answer to #1 is that if testing is done at the end, it’s too late; if you’re agile you’re testing as early as possible and not just at the end.  If you only test at the end, then that’s where you are putting all the risk.

#2 is a really key one; it’s about the difference between releasing something and solving a problem.  The common misconception is that when you’ve put out a release, you’re done; but whilst getting stuff delivered is great, you’re only actually “finished” if you’ve solved the problem you set out to solve.  Shipping a release merely gives you an opportunity to see if you’ve really solved it.

So, if you iterate – great; iterate, test and keep shipping until your target problem is solved.

Roadmaps

Much of Marty’s talk sounded like heresy… in that it would certainly sound blasphemous to many people I can think of.  His discussion on product roadmaps was no exception.

Roadmaps tend to assume that 100% of the ideas on them are good ideas.

The reality is somewhat different.  Marty cited Google: in their experience, for every 10 ideas they have (on a roadmap) only 1 tends to pan out.

Bad use of roadmaps relates back to the second point in “are you agile?” – in that people sometimes confuse delivery with completion.  People walk around with roadmaps and release schedules, focusing on getting stuff delivered.

So if that’s all wrong, what does right look like?

Essentially it comes back to having a strong product vision.

My notes on this part of the talk are scarce – a sign that I was either too deeply engrossed to write, or I agreed with what he said and felt no need to note the obvious.

In either case, the key takeaway for roadmaps draws heavily from the points above – focusing on the product vision – which I think we can safely extrapolate to:

  1. Understanding the customer and their problem.
  2. Giving your teams constraints and time to come-up with the unexpected.
  3. Iterating until solved, not just shipped.

Roadmaps and Agile

From a philosophical perspective, roadmaps are rational – they plan out what is to happen; whereas agile is empirical – it learns from what has happened.

Roadmaps attempt to answer the fundamental questions: how much will it cost, and when will we get it?  And as Marty acknowledges – that’s not an unreasonable thing to want to know.

Agile can answer these questions, but only once you’ve done enough work, to provide enough meaningful experience, on which to base a forecast.  Marty elaborated on that theme in terms of a “high-integrity commitment”.  I don’t have any notes on that, so allow me to refer you to Marty’s blog.

Teams

  • Measure teams as a whole; not in terms of “functional” teams, but product (solution) teams.  (What does this mean?  Think about the difference between “shipping” and “solving” and you’re pretty much there).
  • Provide teams with a competent and confident product manager.

Product Managers

The final subject I want to cover is around product managers – specifically good ones; it’s important to me because it’s highly relevant to what I see as an architect in the solution-architect / domain-architect / enterprise-architect / consultant space.

Marty placed a lot of emphasis on the importance of having a good product manager.  For him the product manager is like the “CEO of the product”, where CEO refers to the calibre of the person in that role, because good CEOs know all the elements of their business.

Product managers need to be smart, creative and persistent.

The product manager must have a deep understanding of:

  1. The customer(s).
  2. Industry trends.
  3. How your business works.

The reason you want a good product manager is because this is the type of invaluable knowledge and wisdom they’ll bring to your team, and to your product.

WSAF Meet-Up on the “Brand” of Architects & Architecture

For those who couldn’t make this meet-up, here’s a summary of what was discussed (or at least some of it; it was one of those organic discussions that took its own path, and I don’t have a lot of notes as I was too busy actively listening or blabbering making insightful contributions).

The basic question was around: how are architects perceived, and what is our “brand”?  We tried not to focus on specific types of architect too much (i.e. enterprise vs solution), although we tended to focus on solution architecture.

This raised initial discussion around:

  1. What does it mean in the context of Agile – which we decided to come back to, but then didn’t.
  2. Distinguishing between architects and architecture – the latter will always be needed regardless of who does it and what they are called.
  3. The correlation between governance and architecture – where there’s a lack of good governance there is often a lack of good architecture or appreciation of architecture in general.

This led to a significant discussion around “can we define the benefits of (solution) architecture, and the risks of not doing it”?  Whilst this is hardly a new problem it is one that we really need to put to bed.  The obvious challenge is not merely to define it, but to do so in a way that is broadly and easily understood.

We also discussed what would logically follow next – assuming you had the ideal definition, what would you do with it?  But unfortunately the conversation took a turn and I don’t have any notes.  From memory, there weren’t any major epiphany moments arising directly from this.  Sad.

People Who Do Similar Things

The topic then came up of comparing what architects did with people who do similar things.

One of the attendees mentioned her brother, who is effectively an architect but doesn’t like to call himself one, but unfortunately we didn’t (or weren’t able to) dig into exactly why that was.

There was also a connection made between the role of a program manager and an architect.  Personally I can see how this might be the case in terms of seniority and leadership, but in other areas the correlation is much less clear.  Perhaps it is such that in some cases a program manager takes on some architectural leadership responsibilities when there is an absence of architects or effective governance.

Later on, this broad topic came back with a comparison to service design.  The widely agreed takeaway was that architects should add this to their general toolbox, the toolbox we all have of skills and ideas that we get from various places but don’t always get to use “for real”.  Service design feels like one of those – something it’s worth knowing a bit about – just enough to be dangerous.

Focus of the Solution Architect: Technical or Business?

We discussed the focus of the solution architect role – is/should its focus be technical or business?  There’s no doubt SAs need a foot in each camp, but is one aspect inherently more dominant than the other?  And because this is about brand, i.e. perception, I asked people to consider not just how they see this for themselves, but also how they think non-architects perceive it.

I asked everyone to think about a scale from 0 to 100, where 0 was all business and 100 was all technical.  I then asked them to silently (in their own heads) come up with the answers to those two questions.  I then drew the scales up on the board and invited people (without changing their minds) to put their scores up.

[Photo: whiteboard showing the two scales and attendees’ scores]

As you can see, people see solution architecture as a largely technical role, and their perception of how they think others perceive it is similar but not identical.

It makes me wonder about engineering architects (people who architect buildings, etc) – do they have a similar or comparable issue with brand?  Are they perceived as being largely technical, and is this how they want to be perceived?

It Ain’t What They Call You, It’s What You Answer to

We then got on to names – what do we call ourselves.  Sadly the list wasn’t very long and we didn’t really push past the obvious, but it was an interesting enough starting discussion for a Friday afternoon.

What’s wrong with “architect” – well nothing in my book, I still think it’s a useful term, and I still often compare myself to a building architect when describing what I do to a lay-person.  But that didn’t get in the way of our discussion.

“Digital Strategist” came up, but then we realised that’s probably taken.  Later someone adroitly evolved this to “Digital Capability Landscaping”.

“Principal _ _ _ _ _ _ _ _ _ _” made it up onto the whiteboard, a brave start but leaves just a tad too much to the imagination.  Typical architect, right?  In my notes I wrote “(domain)”, implying the name of the domain you’re a principal in is the key – but what are those domains?  Perhaps it goes back to the technology vs business discussion – do you go for a technology or business domain?

Someone suggested “Technologist”, and “Solution Design Thinking”.

I’m quite proud of one I dreamt up later: “Trade-off Merchant”.

The conversation then took a turn when someone suggested we pull up Google Trends with “enterprise architect” and “design thinking”.  We then played around with other terms.  I must admit I was pleased to see solution architect is still trending upwards.  (What is Google Trends?  See Wikipedia.)

[Screenshot: Google Trends comparison of architecture-related search terms]

Final Tidbit

Someone mentioned a neat little resource: http://openmodels.org/

“The openmodels.org website hosts the Open Model Initiative, a project to collaboratively develop enterprise reference models for everyone to copy, use, modify, and (re-)distribute in an open and public process.”

The WSAF would also like to thank Middleware NZ for hosting us and providing drinks and nibbles.

Were you an attendee?  Got anything to add to my semi-random collection of notes?  Add a comment 🙂

Design Thinking Entrée, with Blair Loveday, et al (#ITEA 2017)

Coming out of the ITEA conference, I referred to three magical signs that (IT) architecture is going through a positive transformation; #3 was around design thinking, a topic that a number of speakers covered in varying depth.  Amongst those was the heretic Blair Loveday; I mean sheesh, the guy’s a BA Chief Culture Officer, what blasphemy is this, him presenting at an architect conference?

Blair and the others spoke enough about design thinking to whet our appetites, but not nearly enough to constitute a meal; so, based on what I got from the conference and after some digging of my own, here’s my quick overview of design thinking – a sort of entrée to get you started.

In a Nutshell

I’m conscious that I’m partially trying to appeal to IT architects, so let me do so by using the term ‘scientific method’, because it’s going to get a bit touchy-feely later on.

According to Wikipedia, Design Thinking is comparable to the scientific method (feedback is obtained by collecting observational evidence and measurable facts) but with the addition of also considering the human aspect, or emotional state.  The inclusion of the human dimension is a key theme that you’ll find throughout design thinking literature.  For example, use of empathy is one of the specific techniques suggested for the ‘learn from people’ phase.

The Three Lenses

One of the concepts Blair used to describe design thinking was the three “lenses” that you’ll see repeated in design thinking literature: desirability, feasibility and viability.

[Diagram: the three lenses of design thinking]

Blair described these as the people, business and technology lenses respectively, and then talked about them in the context of innovation – by type, depending on which lenses overlapped.  The central overlap of all three was “Experience Innovation”, which sounds fine but with all due respect is just a smidge too touchy-feely, even for me, on a Monday – but that doesn’t matter.

Design thinking, according to Blair, is centred on the people / desirability lens – which is in keeping with Wikipedia’s view vis-à-vis the scientific method.  You’ll notice that this puts emphasis on emotional and functional innovation.

The thing I really like about this model is that it’s simple yet useful, calls out the human part (which is pretty essential) and provides one of several good anchors to understanding design thinking: making stuff that “is cool” or “just works”, for people.

Getting to Grips: Four Into One

Trying to understand design thinking by reading about it online is a little like talking to people who witnessed a traffic collision: everyone’s got a slightly different view.  Given how long design thinking has been around that’s probably not surprising.  I found a number of approaches.  What I’ve done below is to try and distil the main phases from the 4 interpretations that I studied:

[Diagram: the main phases of the four design thinking approaches, colour-grouped]

The colour groupings are my own, looking for commonality across the different approaches, and are only indicative.  The four approaches are based on (from top to bottom): Design thinking example video (wikipedia), IDEO, Stanford University’s ‘Taking design thinking to schools initiative’ (wikipedia), and ‘A Framework for Design Thinking’ (Creativity at work).  Links to these references are below.

The third process, from Stanford, is the one Nick Malik referred to in his talk at ITEA 2017.

Chris Tuohy’s talk on experiences at Westpac also touched on design thinking.  The Westpac approach has seven phases (not including an 8th step, which seems to be a decision point at which the prototyped ideas are passed into a delivery-focused design and build lifecycle).  Unsurprisingly, these seven phases are all in common with those suggested by the 4 approaches above.

[Diagram: Westpac’s seven-phase design thinking approach]

Distilled Comprehension: One from Four

Here’s my general take on what things a reasonable design thinking process should include:

[Diagram: my distilled design thinking process]

  1. Learn from people: 
    1. IDEO seem to refer to this as “Insights”, observation, learning from extremes, interviews, immersion and empathy, and doing this all through the three lenses.
    2. Getting an idea of people’s motivations, habits and delights is a good place to start.
    3. A concept I came across more than once was the idea that people on the extremes (think bell curve) are good at helping to explain ideas that the mainstream are less able to articulate.
  2. Find patterns:
    1. Look at what you’ve learned, try and make sense of it.
    2. Look for themes, apply intuition.
    3. Put yourself in the shoes of the users, leverage empathy.
    4. Distil design principles.  For example ask the “how might we” question: if a design principle or theme says “x” ask how you might turn that into a specific idea or prototype (and remember the three lenses).
  3. Generate ideas: 
    1. Don’t prequalify ideas out, just generate them.
    2. As IDEO say, “Push past the obvious”.
    3. The emphasis is on creativity.  Basically this is the divergent thinking (creating choices) phase.
  4. Make tangible, prototype and test: 
    1. This is the complementary convergent thinking phase (making choices).
    2. Make things tangible and real through prototyping.  Use any method you like, but make sure it’s using something that will resonate with your audience.
    3. Refine and improve.

Some of the descriptions include steps that come after what I would consider to be the core of design thinking (e.g. delivery).  I don’t think that’s necessarily bad.

You could say these were “steps”, implying a formal process; obviously you want to take time to understand before you race off and prototype stuff, but to put constraints that are too formal on the approach would do it a disservice.  For example, the phase of generating ideas and prototyping them could (and even should) be an iterative process.  After all: how many iterations = how long is a piece of string.

Finally, I got hold of Peter G. Rowe’s book “Design Thinking” from the Wellington public library; I haven’t made serious in-roads yet, but it looks interesting.  One of the things I am keen to explore is the author’s view, given he’s coming from the perspective of a “real” architect (i.e. buildings, not IT).

Further online reading and viewing:

  • Wikipedia – check out the example video, it’s a really nice little summary.
  • IDEO – some useful content for sure, but not a lot of it (unless I did a “man-look”).
  • Creativity at Work

3 Ideas from Nick Malik on Design Thinking (#ITEA 2017)

Following the 2017 ITEA conference, I recently reiterated what many of us have known for a while: that traditional architecture and architects are endangered.  I also promised to share some of the great ideas from that conference – practical concepts that you can use right now, and which started to demonstrate how architects can still be relevant and add value.

I’d like to start with ideas from a really valuable talk given by Nick Malik, a 37 year industry veteran who describes himself as a “Vanguard Enterprise Architect, Digital Transformation Strategist, Author, Blogger, and General Troublemaker”, currently Senior “Principal Consultant – Enterprise Architecture” with Infosys.

The subject of Nick’s talk was “Using Design Thinking to Develop your Enterprise Architecture Core Diagram”.  In this post I’ll briefly introduce this key concept as well as some of the other ideas that I wrote down during Nick’s talk.

#1 – Actually Understand the Problem

The first thing I wrote down was incredibly obvious and shouldn’t need reiteration: taking sufficient time to actually understand the problem.  Nick emphasised bringing people into this process – actually talking to people to really understand what they need, so that we “build solutions that people want to use”.

The quote that came to mind during this bit of the presentation was Eisenhower’s “Plans are nothing, planning is everything.”  Why did I think that?  Well, some people will equate “understanding the problem” with analysis and documentation, where the scale of the analysis and documentation corresponds to the perceived scale and complexity of the problem.

But that’s not what was meant – it’s more around the quality of the discussion, and ensuring that there is real understanding of what the problem is, and what is needed.

In my view, the challenge here for some people (and architects) is that doing this well requires quality interpersonal engagement.  I wonder how often we end up with solutions that are system-centric rather than people-centric?  I suspect it’s partly due to the fact that some of this stuff is hard – it’s easy to let the technology control you.  But I also think there’s another aspect to it – that some people who are good with systems & tech aren’t always as confident with people, and so the people-centric part loses out.

Interestingly, the design thinking page on Wikipedia contrasts design thinking with the scientific method; whilst both approaches use iteration, design thinking consciously “considers the consumer’s emotional state”.  Having quality discussions with people doesn’t necessarily equate to discussing emotional state, but even so, I think that the organic relationship between these concepts is apparent, as is their relevance to arriving at better and more holistic solutions.

So, focus more on having quality engagement with people and taking the time to understand.

#2 – The Core Diagram and Design Thinking

The heart of Nick’s talk was the Core Diagram, and using Design Thinking as a way of developing it.  The crucial idea I took from this was connecting an existing and accepted (although possibly under-utilised) architectural concept (the core diagram) with the “modern” technique (design thinking) which has become somewhat hijacked by a market that is “going digital”.

I say “modern” with the slightly sarcastic quote marks because the roots of design thinking actually go back a long way before it became vogue in the current “digital” era. That said, “digital” is relevant to architects because it’s the current language of business, and those not conversant in it risk being marginalised, regardless of what people think digital means.

Before I go too much further I just want to point out that I am new to the concept of the core diagram – at least regarding the specifics of the concept as Nick describes it.  My goal here is simply to help spread the word on this as an idea, because I think it has value.

Nick has been writing about core diagrams for some time (circa 2012), and I wonder how much the approach to developing them has changed?  I haven’t yet properly read and digested the original approach, but it’s now 2017 and Nick is connecting the development of core diagrams with design thinking – I’m not sure whether this represents a fundamental shift in the approach, or a natural evolution that recognises shared principles that were always inherently there.

The reason I mention this is that if you go searching online you’re going to find articles from a few years ago (c’mon, 2012 isn’t that long ago), and you might (incorrectly) feel compelled to dismiss them out-of-hand as not being contemporary and not solidly connected to “design thinking” as is currently vogue.

So, what’s a core diagram?

As with a lot of good ideas the key concept is relatively simple, according to Jeanne Ross (Director, MIT CISR):

“For most companies, I think some kind of picture is essential for understanding the expectations for a business transformation.”

The bold is mine.  Nick included this quote in his deck – having taken it from an email Jeanne sent him in 2011.  Nick described it as “the best advice we all ignored”.

Actually Nick, I think I might have a tongue-in-cheek explanation for that – there’s currently no wikipedia page for Core Diagram 😛

Jeanne describes it as:

“a simple one-page view of the processes, data, and technologies constituting the desired foundation for execution.”

One-page is key.  What you’re after is something that everyone wants to put up on the wall, in their office or the team’s shared space.  You want it to support a wide range of discussions and thinking across all your stakeholders – especially those who are responsible for, or have a lot of influence over, the end result.

Here’s some links for you:

  • “Enterprise Architecture As Strategy” by Jeanne W. Ross, Peter Weill and David Robertson, on Amazon.
  • “What is a core diagram?” MSDN blog post by Nick Malik, 2012.
  • (Slides from) Open Group presentation on the MSBI method of creating Enterprise Architecture Core Diagrams, on SlideShare, 2012.

A Brief Aside into Marketecture

As Nick was describing the core diagram I couldn’t help but mentally connect it with Marketecture and effective marketecture diagrams.  In Nick’s view they aren’t the same thing, and I can see why he says that – but it’s subtle, multi-dimensional, and I’m still thinking about it.

I’ve previously found a number of useful definitions that help capture what I think marketecture is (which I sketch out in “Appendix: The Mysteries of Marketecture” in this post).  In summary it’s:

a business perspective, including concepts such as licensing, the business model and technical details relevant to the customer; it can also serve as an informal depiction of the systems structure, interactions and relationships that espouse the philosophy behind the architecture.

We had a very brief discussion whilst walking out at the break; Nick’s view was (assuming my recall is accurate) that marketecture is designed to assist the “sale” of the solution, with an underlying implication around the “transactional” nature of the sale; whereas “you can take a core diagram to governance meetings”.

I guess it depends on what is meant by “sale” – there’s the commercial sense i.e. trying to sell faster processors to end users, but there’s also the idea of “selling” a solution as being viable to executives and governance bodies.  From a philosophical stand-point I think good marketecture and core diagrams have that in common.  There’s no doubt a lot more to explore here.


#3 – Ideation Techniques

Design thinking, and the concept of rapidly coming up with ideas deserves more time and space than I can give it here, so to get you started, let me just give you a couple of the ideas Nick shared:

  • Reverse Brainstorming – Instead of asking, “How do I prevent this problem?” ask, “How could I cause the problem?”  The idea is that by initially focusing more on the problem you’re then better equipped to start considering solutions.  It reminds me of the 37signals piece called “Have an Enemy”: “Sometimes the best way to know what your app should be is to know what it shouldn’t be.  Figure out your app’s enemy and you’ll shine a light on where you need to go.”
  • SCAMPER – an acronym for an activity-based thinking process which helps you think outside the box: Substitute, Combine, and so on.  It’s been around since 1953.


#ArchitectureInTransformation