Six Boxes Performance Thinking – https://www.sixboxes.com/

OKRs: Old Idea, Still Poorly Defined – A Big “New” Thing

OKRs (Objectives and Key Results) seem to be the new flavor of the year for management and goal-setting. Lately, articles, social media posts, and colleagues alike have been buzzing about OKRs. OKRs are “hot.”

This framework started to emerge when Peter Drucker, the often-cited management consultant, introduced Management by Objectives (MBO) in 1954. In the 1970s, Andy Grove, then a top executive (and later CEO) at Intel, refined MBO into OKRs. In 1999, John Doerr, the renowned venture capitalist, introduced OKRs to Google. And since 2010, inspired by Google's example, many companies have been adopting OKRs. These days, OKRs seem to be the “new” thing in goal-setting and performance management.

But what is a “good” OKR, and how should we create them? Currently, many leaders, managers, and consultants are doing what they always do with “the next big thing”: recommending OKRs as a near panacea for management issues. There are training and certification programs for OKRs. If you look for guidelines for defining them, you often find the suggestion that they resemble SMART goals (an acronym that, according to Smart As Hell guru Glenn Hughes, has more than 100 different meanings). Perhaps the key elements are Specific and Measurable: we want goals to be specific enough that we can measure them.

Trained as a behavior scientist, I tend to be critical of many management and talent development tools because they are often too loosely defined, too abstract, or too open to interpretation. In behavior science – and I would say in performance engineering as well – we need clear definitions of the things we hope to improve so we can measure them.

The extreme case of widely adopted but unmeasurable descriptions of performance is competencies: abstract category names for clusters of behavior that can vary tremendously from one example to another. Communication effectiveness as a competency, for example, looks quite different in a tough negotiation with a potential business partner, in selling a product to an interested prospect, or in a 1:1 conversation with a direct report. There are few if any similarities among the behaviors needed in these situations except at the most abstract level. And yet they all get clustered under a single label. Rating people on competencies is a form of refined opinion, not objective measurement.

While OKRs are certainly better than competencies as a way to set goals, to measure performance, and to monitor progress toward goals, competency modeling sets a very low bar with respect to effectiveness. 

So, let’s look at OKRs a little more closely.

Describing Objectives in OKRs

An OKR is defined as an objective combined with a short list of key results used to measure progress toward the objective. Sounds good, but what does that really mean? If we search the Internet, we see a wide range of examples.  Most examples of objectives involve verbs, including:

  • Increase efficiency of the QR process
  • Delight customers
  • Enhance security across the enterprise
  • Deliver amazing customer support

Certainly, these statements of objectives are aspirational, and maybe even inspirational for some.  But until we know how to measure them, they don’t really mean anything. One might ask, “How long are you going to increase, delight, enhance, or deliver?”  In other words, when do you know you’re done, or successful?

That’s the problem with objectives stated as activity. They’re like a meeting (an activity) that does not produce anything of value (e.g., decisions, plans, agreements, documents) but merely continues at the next meeting. An activity is not really a defined objective; at best, it’s a vaguely defined strategy or tactic intended to achieve an objective. That is why we need the second part of the OKR – Key Results – to define the value of the activity described by the objective. We need something to anchor the objective in an outcome that is definite, measurable, and an indicator of success.

Identifying Key Results

Here are some examples of "good" Key Results found on the Internet:

  • Maintain a 100% service level for critical major consumables
  • Redesign order process
  • Increase Net Promoter Score (NPS) from 6 to 9
  • Conduct a security assessment of code base using automated tools

Interestingly, these all seem to contain verbs as well, indicating activity. And when aggregate measures are used, such as Net Promoter Score (NPS), the result does not give me anything more specific to produce or achieve than an aspirational goal.

One might ask: what must we actually increase in order to raise the NPS to 9? More customers who rate us 9 or 10 on the NPS scale? We can count them, much as we can count the number of people who rate a product at each level in reviews on the Amazon website. By identifying the countable accomplishment, we make measurement straightforward.
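
To make the counting concrete, here is a minimal Python sketch (the function name and survey data are hypothetical) that reports the countable accomplishments behind an NPS figure – how many promoters and detractors we have – using the standard definition of NPS as percent promoters (ratings of 9-10) minus percent detractors (ratings of 0-6):

    from collections import Counter

    def nps_summary(ratings):
        """Summarize 0-10 survey ratings as counts, not just a score.

        Promoters rate 9-10, detractors 0-6; the standard NPS is
        (% promoters) - (% detractors).
        """
        counts = Counter(ratings)
        promoters = counts[9] + counts[10]
        detractors = sum(counts[r] for r in range(7))
        nps = 100 * (promoters - detractors) / len(ratings)
        return {"promoters": promoters, "detractors": detractors,
                "nps": round(nps, 1)}

    # Hypothetical survey: the countable thing is "customers who rate us 9 or 10"
    print(nps_summary([10, 9, 9, 8, 7, 6, 10, 3, 9, 8]))
    # {'promoters': 5, 'detractors': 2, 'nps': 30.0}

Counting promoters directly gives a team a concrete accomplishment to produce (more customers who rate us 9 or 10) rather than an abstract score to chase.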

“Redesign order process,” like some of the typical descriptions of objectives, raises the question, “How do I know when I’m done?” Some redesign projects result in worse outcomes rather than better ones. Might one at least suggest a cycle-time or quality outcome to define the desired result?

The point here is not to attack these specific examples. There are examples all over the Internet, and they vary in all kinds of ways. That is because there is a lack of clear definition in the OKR guidelines about what constitutes an objective or a result.

Clarifying Objectives and Results Based on Performance Engineering

Let’s consider another approach for setting objectives and defining results with respect to performance, one grounded in the models and logic of Performance Thinking.

Performance Thinking® is a design engineering discipline. As with any form of engineering, it applies basic science. We know from behavior science that we can identify and count or tally specific forms of behavior, such as asking questions, writing messages, or assembling widgets. Pinpointing behavior means using a verb (an action word) to describe an action that can be repeated, and is thus countable. The underlying methodology for monitoring behavior in Skinner’s behavior science is to count instances of behavior and calculate rates of response, i.e., how many instances occur per unit of time.

Extending to performance engineering, as defined by Thomas F. Gilbert, we know that the value delivered by behavior is not in the behavior itself but in the products of behavior: widgets, decisions, documents, relationships, plans, agreements, prototypes, machines ready to go, and so on. In fact, behavior itself is generally a cost; it takes money, time, and attention to enable people to behave on the job.

From a performance engineering perspective, we want to know what of value the behavior produces, and how the value of that product compares with the cost of producing it. This defines the second element of performance: the accomplishment, or valuable product of behavior. At The Performance Thinking Network, we call these work outputs. An accomplishment or work output is a product of behavior, expressed as a noun describing a countable thing, not an activity. And to be valuable, it needs to contribute to organizational or societal results.

That, in the Performance Thinking framework, is the third element of performance: the results that define the success of a whole organization, or perhaps of a society. Organizational results might include revenue, profit, customer satisfaction, operational efficiency, safety, employee engagement, quality, employee retention, and so on. These are usually aggregate results to which many accomplishments contribute, and they define the very success of the organization. While there might be intermediate operational results such as cycle time or sales efficiency, what we’re looking for here are the results that senior executives, business analysts, or owners equate with the success of the whole organization. These results usually appear, in some form, on their “dashboards.” Societal results are of a similar nature but generally reflect a larger scope, such as a nation, humanity, or the planet as a whole. In any case, these are the results that define success for the overall entity.

If there is no credible link, direct or indirect, from an accomplishment to overall results, then that accomplishment may not be worth the time, money, and resources spent to produce it.

The Performance Chain model sums up these three elements, with definitions of each. We have found that when managers, leaders, and performance improvement professionals are precise about these definitions, their work can be far more effective because everyone knows what they are talking about. They can easily set goals and objectives, measure them, and know when they are achieved. We have also seen the implications of an accomplishment-based approach in other aspects of business and human performance management, as in our discussions with strategic planning master, Dr. Peter Dams, about making strategic plans more executable.

MBO, KPI, OKR, and SMART: Murky without Clear Definitions of the Elements

Whether we are coaches, leaders, or performance professionals, we can frame conversations about goals by agreeing on what’s at stake for the whole organization: “Are we trying to improve profits, market share, customer satisfaction, or employee engagement…?” Knowing what we’re trying to achieve for the organization or society as a whole gives us both motivation and a north star to guide decision-making. We know that improving one or a few accomplishments might not move the needle on organizational results. But we can usually determine fairly readily whether a given accomplishment is likely to contribute to those results. We ask what’s at stake because the answer will shape how we proceed, influencing our decisions and priorities.

Notice that measures of organizational results are usually lagging indicators. That is, they do not give us data points frequently enough to make timely data-based decisions. We might get organizational results measures once a month or once a quarter. We always want to be sure that a given increase is not just an “up bounce” of normal variability that will come back down next time, or the continuation of a trend that was already under way before we intervened. In most cases, we need a minimum of 5 to 7 data points to be confident about trends and variability, and thereby to make truly informed decisions.
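
As a rough illustration of why those 5 to 7 points matter, here is a minimal Python sketch (the function name and quarterly numbers are hypothetical) that computes a least-squares slope and the spread of a short series; with too few points, an apparent trend is hard to distinguish from ordinary variability:

    import statistics

    def trend_and_variability(values):
        """Least-squares slope per period, plus the spread of the series.

        With fewer than about 5-7 points, treat any apparent trend
        as tentative: it may just be an "up bounce."
        """
        n = len(values)
        x_mean = (n - 1) / 2
        y_mean = statistics.fmean(values)
        slope = (sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
                 / sum((x - x_mean) ** 2 for x in range(n)))
        return {"slope_per_period": round(slope, 2),
                "stdev": round(statistics.stdev(values), 2),
                "enough_points": n >= 5}

    # Hypothetical quarterly results: is the last value a trend or a bounce?
    print(trend_and_variability([72, 75, 71, 74, 73, 78]))

The point is not the statistics; it is that a single monthly or quarterly data point cannot tell you whether anything real has changed.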

When setting goals, we can examine whatever area of performance we are discussing – an individual, a team, a process, a business unit, etc. – and determine which important accomplishments constitute the value delivered in that domain, contributing to organizational results. These are countable things. We can define entire jobs or processes by the accomplishments they deliver, and then select those of interest to incorporate into our goals. For an individual, it might be an accomplishment that needs improvement, one required for a next big project, or one that is the next step on a career ladder. For a process or a team, it might be the accomplishments that are holding us back, or those most critical to overall outcomes. For a business unit, it will almost always be accomplishments that will advance the organization’s results in a big way (e.g., a new team, a prototype product, new ideas about safety practices, good relationships with suppliers and customers, etc.).

Once we know the desired accomplishments and “what good looks like” (agreed-upon, unambiguous criteria or standards that define a good instance of the accomplishment), we can look in as much detail as needed at the activity or behavior required to produce a given accomplishment.

It’s important to note, particularly given how often OKRs seem to include verbs to describe activity, that behavior for producing the accomplishment is neither the goal nor the result. Instead, behavior is how we achieve a valuable accomplishment. Part of working smarter or being strategic is to figure out the best, most effective, and efficient behavior for producing those accomplishments. If we focus on the accomplishments, then top performers might invent or discover better behavior for producing them, and we can learn from our exemplary performers.

In any case, this logic has the potential for dramatically clarifying KPIs, SMART goals, OKRs, or any other descriptions of goals and the means of accomplishing them.

Simply stated, a good goal would include what’s at stake for the organization or society and what we intend to produce or achieve that will contribute to that. We would likely state the goal starting with the accomplishment.

Accomplishment-Based Goals

Examples of accomplishment-based goals include:

  • A user interface for our internal software system that user testing shows is quicker and more intuitive for employees to use, thus improving employee engagement and productivity. (With criteria based on user testing protocols for “quicker and more intuitive.”)

  • At least 20 new introductions to CFOs who need our product, leading to sales and increased revenue (with a list of criteria that define what it means to “need” our product).

  • Good working relationships with other software development team managers, to increase productivity and operational efficiency (with a list of criteria for what constitutes a “good working relationship”).

  • A new Sales Enablement team that cuts across the usual silos for how we support our salespeople, for greater ROI and operational efficiency.

In short, a better way to define a goal is to describe a thing (an accomplishment) we want to produce or achieve, define what “good” looks like, and indicate how that thing will contribute to organizational or societal results.

Implications for Measuring Performance

Notice that when it comes to measurement, we can tally or count accomplishments – either over time (e.g., introductions per week) or when the one big accomplishment is complete. We can also break accomplishments into sub-accomplishments: parts of the larger thing, or things produced at each step toward completion of the bigger thing. Accomplishments that meet criteria for “good” are thus easy to count.
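
For example, here is a minimal Python sketch of such a tally (the data and function name are hypothetical): each accomplishment is logged with its date and whether it met the agreed-upon criteria for “good,” then counted per week:

    from collections import defaultdict
    from datetime import date

    def weekly_tally(log):
        """Count accomplishments per ISO week, split by whether each
        met the agreed-upon criteria for "good"."""
        tally = defaultdict(lambda: {"good": 0, "not_yet": 0})
        for when, met_criteria in log:
            year, week = when.isocalendar()[:2]
            tally[(year, week)]["good" if met_criteria else "not_yet"] += 1
        return dict(tally)

    # Hypothetical log of CFO introductions: (date, met criteria?)
    log = [(date(2024, 4, 1), True), (date(2024, 4, 3), True),
           (date(2024, 4, 4), False), (date(2024, 4, 9), True)]
    print(weekly_tally(log))
    # {(2024, 14): {'good': 2, 'not_yet': 1}, (2024, 15): {'good': 1, 'not_yet': 0}}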

Usually, accomplishments can be measured more frequently than monthly or quarterly, which makes them better leading indicators than most organizational results measures, and a better basis for monitoring the impact of continuous improvement. Whether you espouse SMART goals, KPIs, or OKRs, they will be better if they are accomplishment-based.

- Dr. Carl Binder

Simplifying Performance Measurement

Professionals in learning and development, performance consulting, quality improvement, leadership, management and coaching often try to measure the impact of what they do to develop or improve performance in their organizations. Organizations decide on key performance indicators (KPIs), typically holding individuals and teams accountable for achieving goals measured with those indicators.  Everyone wants good measurement, but it is frankly quite rare in many organizations.

In the learning and development space, there has long been the levels-of-evaluation model, originated by Dr. Donald Kirkpatrick, whereby measurement is said to be possible at four levels: Reaction, Learning, Behavior, and Results. Others have suggested adding a fifth level, Return on Investment (ROI), relevant to efforts designed to develop or improve performance.

In the world of business strategy, the Balanced Scorecard model, from business strategy thought leaders Kaplan and Norton, offers a framework for what to measure. The underlying notion is that one cannot ultimately define or measure business strategy with an exclusive focus on financial results. Instead, they argue that we should use a balanced scorecard of measures that include multiple perspectives, specifically Financial, Customer, Internal Process, and Learning and Growth. Including this broader range of measures enables one to look more holistically at the performance of an organization and its people, and can offer important insights leading to better decisions.

Looking a little more deeply, we see that these frameworks do not define specifically what to measure or how. They provide a conceptual framework, or heuristic, for selecting specific measures. And, honestly, people often get it wrong: they either do not measure in ways that support good decisions, or drown in too much data with few informative insights.

Some commonly used measures do not measure what they claim to measure, or are open to wide interpretation. So-called smile sheets, on which participants in programs or experiences rate their satisfaction on a Likert scale (what Kirkpatrick would call a reaction measure), are not measurement as a natural scientist would accept it. As one of my late mentors, Eric Haughton, often said, rating scales are refined opinion. Moreover, it is widely known that how much people enjoy or appreciate a learning experience does not predict whether they have learned anything or will perform well. At a minimum, if you are going to use rating scales, we suggest displaying how many people gave each level of rating, as in the customer reviews on Amazon's website. That gives us something to count and analyze, rather than adding up the rating values and dividing by the number of ratings to obtain a meaningless number. (We call that voodoo math, because rating levels are categories, not numbers that can be meaningfully added, subtracted, multiplied, or divided.)
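
Here is a minimal Python sketch (the data and function name are hypothetical) of the alternative we suggest: report how many respondents chose each level, rather than averaging category labels:

    from collections import Counter

    def rating_distribution(ratings, scale=range(1, 6)):
        """Count how many respondents chose each level of a 1-5 scale.

        The full distribution is countable and analyzable; an average
        of category labels is not a meaningful number.
        """
        counts = Counter(ratings)
        return {level: counts[level] for level in scale}

    # Hypothetical smile-sheet responses
    print(rating_distribution([5, 4, 4, 5, 2, 3, 5, 1, 4, 4]))
    # {1: 1, 2: 1, 3: 1, 4: 4, 5: 3}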

Percent correct is perhaps the most damaging measure of all in education and training. Being accurate, at the 100% correct level, does not predict whether a person can recall what they learned, apply it, or work efficiently in distracting environments. (There is plenty of related research on the website www.Fluency.org.) A competent adult can, for example, write answers to simple addition problems at 100-150 digits per minute, while a typical second grader might perform accurately at 20 or 30 digits per minute – too slowly to be useful in mental math or “story problems.” The time dimension makes all the difference when we measure performance, and once we calculate the percentage, percent correct is “blind” to time and to actual count.
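
As a minimal sketch of the contrast (the timing numbers are hypothetical but fall in the ranges above), both performers below are 100% correct, and only the rate measure distinguishes them:

    def accuracy_and_rate(correct, errors, minutes):
        """Report percent correct alongside rate (correct count per minute).

        Percent correct discards both the time dimension and the raw count."""
        total = correct + errors
        return {"percent_correct": round(100 * correct / total, 1),
                "correct_per_minute": round(correct / minutes, 1)}

    # Hypothetical one-minute math-facts timings
    print(accuracy_and_rate(correct=120, errors=0, minutes=1))  # adult: 100%, 120/min
    print(accuracy_and_rate(correct=25, errors=0, minutes=1))   # second grader: 100%, 25/min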

In many organizations, so-called key performance indicators (KPIs) are constructed with formulas that some employees may not understand, yet those employees are held accountable for improving the measures. Often, KPIs are not counts of things but formulas of some kind.

At The Performance Thinking Network, we stay with measures that would be accepted in the natural sciences, such as physics, chemistry, biology, or B.F. Skinner’s experimental analysis of behavior, from which our work evolved. That means we prefer to count things over time; count and time are two standard, objective dimensions of measurement that you will find in any natural science. We can count and time production of work outputs, instances of behavior, or units of business result measures.

As a framework for measurement, we use the Performance Chain. That is, we can measure organization-level business results, work outputs that meet criteria for “good,” and behavior.

Organization-level Business Results: We measure the business results that owners and investors use to assess the health of the organization as a whole. This is important for anyone on a project, or monitoring an initiative or training program, that is expected to contribute to the organization’s success. However, most organization-level results are lagging indicators. In other words, we get infrequent data points (monthly or quarterly in many cases), so we seldom gather enough of them to make reliable decisions. We should still measure business results when we can, but recognize that we typically need at least 5 to 7 data points to identify trends or understand how much variability there is in a given measure.

Work Outputs (accomplishments): These are the countable products of individuals, teams, or processes. They are the valuable contributions of human performance that help to achieve business results. Often, they are permanent products (e.g., widgets, successful proposals, good treatment session notes, etc.), thus a bit easier to count than behavior. And even when they are less tangible (e.g., decisions, relationships, people who can demonstrate the ability to do X), because they are countable, and usually happen with relatively high frequency, we can get more frequent data points (e.g., hourly, daily, weekly).  Thus, we can make decisions using these data more frequently. In other words, counts of work outputs can be leading indicators.

Behavior: We can use checklists to monitor the occurrence of different forms of behavior (e.g., as we listen to recordings of customer service representatives, or observe safety practices in dangerous environments). We can also count behavior (e.g., the number of times per day a manager provides positive feedback, or the number of phone calls a salesperson makes per week). While measures of behavior can be very useful for feedback, and for diagnosing why individuals or teams are not producing work outputs as expected, it can be relatively expensive and time-consuming to monitor behavior. Behavioral measures can be helpful leading indicators if the behavior happens fairly often, and measuring behavior can itself often help to improve performance. However, if one does not need to measure behavior, a better choice for leading indicators is to count work outputs that do and do not meet criteria.

When we advise participants in our certification programs about how to measure impact and make data-based decisions, we generally suggest that they first analyze the performance of interest into its components: work outputs, the behavior for producing them, and the business results to which the work outputs are expected to contribute. We then suggest they create a short list of measures, guided by the performance chain, that are easiest and least costly to obtain, most indicative of successful performance, and that we can use to make frequent decisions for continuous improvement. After trying out a set of measures, we can sometimes calibrate or adjust what we measure and how often, to provide a good foundation for evaluation and decision-making.

- Carl Binder, CEO

Improving Human Performance in Processes

I’ve been working as a consultant in organizations for over 40 years. For part of that time, I was intimidated by process improvement specialists because they have a lot of data and a lot of tools. Whether Six Sigma, Lean, Total Quality, or any of the other approaches for improving the efficiency of processes and the quality they produce, there is a lot of power in the methodologies used by process experts.

At some point, I began to look at processes, and how our expert colleagues design, document, and improve them, through the lens of Performance Thinking® models and logic.  I soon realized that Performance Thinking can add value, even for the most sophisticated process improvement methodologies.

I worked on some big projects alongside process specialists to see what I could offer. This was when I was still doing big, hands-on performance improvement projects in Global 1000 companies, before I started teaching others how to do Performance Thinking and coaching them through projects of their own.

In one highly visible project at a medical products company, I got to work with some especially skilled Lean and Six Sigma practitioners.  We learned some things together, and the robust set of recommendations we made to senior management at the end of the project benefited from their expertise, as well as from my application of Performance Thinking. We had lots of conversations during the project in which we shared data, suggestions for what to do next, and what we were discovering. It was a lot of fun, and we did great work together!

In the end, I think the most important things that I had to offer from Performance Thinking for process improvement can be summed up in two parts – by applying two essentials of Performance Thinking, the Performance Chain and the Six Boxes® Model.

First, we always insist on defining the work outputs, or milestones, produced by each step – by each task or sequence of behavior. Not all process professionals do that. They often conduct detailed “voice of the customer” analyses to be sure they know what is expected at the end of the process. But in my experience, many of our process improvement colleagues do not identify the things (countable nouns) that each step produces, and they do not always crisply define “what good looks like” for each work output (milestone) in the process.

The Performance Chain model guides us to the work outputs, or accomplishments.  When we identify the work outputs, it is easier to determine where things go wrong inside the process, and to monitor and measure performance at a more detailed level, by counting milestones that meet criteria, and those that do not.  This is usually obvious to the process specialists I have worked with, once stated. But it is not always practiced. The simplicity and care with which we Performance Thinkers define work outputs is helpful. We are very disciplined about that.

Second, when a milestone/work output in a process is deficient, we examine the behavior for producing it at a more detailed level. We can sometimes identify exemplary practices (bits of behavior) that enable “star” performers to produce outputs with greater productivity or quality. This exemplary performer analysis, inherited from our mentor and predecessor Dr. Tom Gilbert, is a powerful performance improvement strategy.

We can then use the Six Boxes® Model as a framework for analyzing and optimizing the factors that influence that behavior, instead of merely tossing it in the bucket of “human error.” In effect, we use the Six Boxes as the “bones” of the Ishikawa (fishbone) diagrams that process specialists often use in their analyses.

We Performance Thinkers have some things to add to process improvement, and are happy to work with process experts collaboratively. Of course, we're even more excited to welcome expert process improvement colleagues into our growing global community, since we find that the models and logic of Performance Thinking can work well with what they know and do.

- Carl Binder, CEO

Accomplishment Based Talent Development

Thomas F. Gilbert, one of the great thought leaders on whose work our Performance Thinking models and logic are based, emphasized that to improve, lead, or manage organizational or business performance, we should focus on what he called “valuable accomplishments” rather than “costly behavior.”

Accomplishments are the things that individuals, teams, and processes contribute to their organizations, while the behavior needed to produce them is costly to support. We hire, train, and manage people so they will learn and exhibit the behavior needed to produce accomplishments. We give them tools and support as needed, pay them, and engage them in ongoing development. These are all costly investments. One of Gilbert’s big ideas was that the worth of any given effort to improve or develop performance equals the value of the accomplishments it enables people to produce divided by the cost to develop the behavior for producing them – a simple ROI calculation. Most leaders and talent development professionals do not think this way, even when investing in expensive training and taking people off the job to complete it. But focusing on the accomplishments we expect of people can help us stay aware of what it costs to enable productivity, and perhaps to calculate the relative costs of different approaches (e.g., training vs. job aids with coaching) when we decide how to develop people.
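
As a minimal sketch of that calculation (the dollar figures are hypothetical), Gilbert's worth ratio can be written as value divided by cost, which makes comparing development approaches straightforward:

    def worth(accomplishment_value, cost_of_behavior):
        """Gilbert's worth ratio: value of the accomplishments produced,
        divided by the cost of developing the behavior that produces them."""
        return accomplishment_value / cost_of_behavior

    # Hypothetical comparison of two ways to develop the same accomplishment
    print(worth(accomplishment_value=50_000, cost_of_behavior=20_000))  # training: 2.5
    print(worth(accomplishment_value=50_000, cost_of_behavior=8_000))   # job aids + coaching: 6.25

A worth ratio above 1 suggests the effort pays for itself; comparing ratios suggests which approach delivers more value per dollar.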

When we define accomplishments as “work outputs” – countable things that deliver value to the organization or society – we can list a range of possibilities. They include tangible deliverables and quantifiable transactions. They might also include important decisions, new ideas, valuable relationships, or recommendations. For trainers, coaches, and managers, accomplishments should include people who can do or produce something valuable at the end of training, or as a result of effective management or coaching.

When we create job profiles based on the important accomplishments of the job, we can drive the entire process of talent development based on value delivered. We can create better behavioral interviewing questions and performance tests, based on accomplishments new hires will need to produce. We can on-board people based on what they need to produce in the first days, weeks and months on the job. We can design accomplishment-based training, as Dr. Joe Harless taught us to do. And we can place people into an ongoing cadence of accomplishment-based coaching with their managers focused on what they need to produce on the job, for upcoming projects or perhaps for the next job on their career path.

Focusing on accomplishments makes the job of monitoring and measuring performance much easier, because we can count “good” ones.  And the value of accomplishments might help us to quantify the salary increases we offer people at the end of the year.

All in all, an accomplishment-based approach to ongoing talent development is more sure-footed, easier to monitor, and more clearly focused on the value the organization needs its people to deliver.

- Carl Binder, CEO


Virtual Meetings and Workplaces: Here’s What I Think

COVID has taught us a lot about meeting virtually and conducting day-to-day business via web conference and communication platforms. Before COVID, for example, we would not have thought of offering our Performance Thinking® Coach program virtually, because we viewed coaching as a face-to-face activity, best done between two people “in the room.” Now, of course, we are all accustomed to communicating and collaborating online, every day. Many of our relationships are online, where we likely make direct eye contact more than if we were in a physical space together, where “staring” at one another might seem impolite. We have learned to connect virtually, and that is now normal and expected. And I have certainly established and maintained some wonderful personal and professional relationships with people around the world whom I have never met in person.

Some business leaders are pushing hard to “return to the office.”  They assert that innovation and collaboration are better served by in-person communication.  Some have tried to identify what they think is better, particularly those business leaders who have invested a lot to create physical environments to support and encourage creative collaboration. 

Many employees have pushed back against the idea of returning to the office. For understandable reasons, they prefer the comfort and convenience of their own homes, access to their family members and pets between virtual meetings, saving commute time, and other factors that they value. 

Yet, there are two sides to this consideration. And my conclusion so far is that a hybrid approach to communication and collaboration makes most sense, a combination of in-person and virtual.  I think we should try to be with each other in person some of the time, when possible, if we wish to have the strongest, most vital and productive personal and working relationships. Let me list some of the things I’ve noticed.

I worked 6 feet from my long-time Manager of Special Projects for years before COVID. We were so much on the same page, so connected during most of our work days, that she could almost read my mind. She would fill in gaps in my memory or attention to detail, anticipate what was needed, and be an utterly invaluable resource for that reason. She witnessed conversations I’d have with colleagues and clients; we shared comments and insights informally between meetings, filling in gaps, refining things said earlier, reminding each other of things we had said at other times. In retrospect, I think that maybe 20% of our communication occurred in scheduled meetings or discussions. Most of it was informal, unplanned, and ad hoc. That is how we worked so closely together.

Once we went virtual, meeting regularly on Zoom and staying in touch with texting and emails, we certainly continued to get things done. But I noticed over the years during which we worked mostly apart that we lost details, what one might even think of as the texture or fabric of our working together. What had become a dynamic collaboration became more of a sharing of to-do lists. While I am sure we adapted, and people in many organizations have been able to adapt more fully than we did, I still believe that a lot was lost.

In the personal context, similar things evolved over time. As an example, I have a dear friend who lives 2,000 miles away (actually many such friends, but this one in particular was very close). As we moved from seeing each other in person on a regular basis to 100% virtual communication, I found many cases of incomplete or misunderstood communication. Everyone experiences incomplete or misunderstood communications. But when we share the same space, those things work themselves out: we can poke our heads into the other’s space, walk by with a comment in the kitchen, or mention something forgotten over a meal to clarify an earlier conversation. When communication is less frequent and scheduled, gaps or misunderstandings can fester and grow, sometimes even leading to damaging disconnects.

And then there is body language. When we meet “from the chest up” in Zoom or FaceTime, we don’t see how people position their legs, how relaxed or open they might be, when their body movements suggest anxiety, misunderstanding, or emotions of one kind or another. When we cannot see how someone gets up out of a chair, or sits down, or enters the room, or has different levels of energy at different times during the day, we can miss a lot. And especially in this era where empathy between people is becoming a highly regarded value, that can be a huge disadvantage. In virtual communication, we miss the “iceberg” below the surface, which often contains important cues and indicators affecting how we are together.

I have concluded that with respect to personal relationships, we need to recognize that virtual is different, and not necessarily as complete or revealing as in-person. We can simply be mindful of that difference. And in close working relationships, I have come to the conclusion that, if possible, at least one or two days per week ought to be in-person. With some in-person time, all those details, nuances, and the in-between conversations can continue to fill in the gaps that would otherwise be left unfilled, maybe unrecognized, and possibly harmful to both the relationships and the productivity of working together.

What do you think?

- Carl Binder, CEO

Strengthening Practice Of Cultural Values From The Inside Out

Most efforts to strengthen organizational culture work from the top down. Leaders agree on the values, model the practices, and in one way or another lead the culture.

This is important, but seldom sufficient, unless it begins with a start-up like Apple or SpaceX, where culture began with a few Founders.

Our cross-cultural work over much of the last decade, particularly the years we spent working with large South Korean companies such as the LG Group and GS Caltex, taught us a lot about culture. South Korean business culture had been successful with command-and-control and everything that goes with it: follow orders, don't question your seniors, execute. Korean businesses have been extraordinarily successful as “fast followers” in so many ways. It's how they rose from the ashes after WWII and the Korean Conflict to become an economic power.

But to be innovative and keep up with accelerating trends, you have to push back, be a little messier, engage in vigorous disagreement, and welcome new ideas. So when I spoke with the Chairmen of some of the larger companies, they realized they needed to change their culture. They were sending many of their young people to American universities for MBAs, betting on American innovation culture. But successful middle and senior managers had gotten where they were via command and control.

As I worked with a group of senior leaders at one large company, I realized that we have a sort of “secret” built into our Performance Thinking® models and logic – a secret for strengthening the practice of cultural values at the ground level. The units of analysis in the Performance Chain model, plus support from the Six Boxes® Model, can address two challenges in important ways.

1) Practices that embody cultural value statements may vary depending on department, function, or process. Focus on the Customer, Quality First, or Innovation Leads might be practiced differently in IT compared with HR or Customer Service. So it's hard to specify practices from the top, as general forms of behavior, that are also specific enough for every individual and team to adapt the values to their own contributions. Diversity and Inclusion, for example, might affect the design of user interfaces in IT, hiring and promotion decisions and the configuration of talent acquisition teams in HR, and the photographs chosen for sales collateral by Marketing Communication. And so on.

2) To model, teach, shape, and recognize cultural practices, we need more than executives giving occasional awards or HR producing videos of exceptional teams. We need leaders, managers, and supervisors helping their people identify what we call their work outputs or contributions (as “countable nouns”), and highlighting the ones that might be affected by cultural values. They then need to discuss how criteria for “good” might change based on the agreed-upon value statement, and talk about behavior for making those specific contributions in ways that practice that value.

In other words, we need to help managers and supervisors get specific with their people. And our Performance Thinking approach can help. You can read a slightly nerdy article I published a few years ago on culture or watch our recorded webinar that covers much the same ground.

We address cultural values and practices, based on participant interest, in both our performance consulting certification program and our coaching and leadership programs.

- Carl Binder, CEO

Patch the Holes in Your Sales Enablement

I’ve been involved with sales performance and sales enablement for nearly 50 years. Three of my four companies have focused almost exclusively on sales performance, spanning multiple industry segments across the globe.  Frankly, not much has changed over the years, other than the high-tech tools, and the continued flow of new sales experts, books, and training programs. Two things I’ve noticed have definitely not changed, and I’d like to bring them to your attention.

First, very few sales organizations document their successful sales process in detail. This is, I believe, partly because people tend to focus on behavior or activity, and not on the milestones or accomplishments that successful activities produce.  Yes, there are sales pipelines and funnels, lists of sales objectives, and sometimes even milestones to mark off phases in the sales process, such as qualified leads, meetings with decision-makers, requests for proposals, and closed deals. Those are good, big objectives. In most business-to-business sales, however, the milestones or possible call objectives are more fine-grained. In the field of performance improvement, we call them accomplishments or work outputs. They include relationships, decisions, documents, agreements, appointments, and sometimes many other small achievements in the process from qualification to closing.  

The most successful salespeople know about these progress indicators, at least intuitively, and identify them when prioritizing activities and setting goals for sales calls. Things like a good relationship with the receptionist, good decisions about whom to meet next, and getting the right sales collateral to the right person at the right time are what keep the attention of sales stars. These are the kinds of small accomplishments that can make a difference; they are often discussed in passing but seldom codified or documented. If you have people selling your product successfully in your market, you can study exactly what they accomplish at each step, when they decide to pursue optional milestones, and how they sequence and juggle their work to achieve them.

By documenting these small outcomes in the sales process, you capture and define a roadmap that less experienced people can follow, and that can guide all of your sales training, coaching, and enablement efforts. You will also set the context for identifying the individuals and teams who accomplish each of these milestones most effectively and efficiently. In other words, you create a framework for identifying exemplary behavior: the small tricks and tactics that the best people discover to move things forward, and that often account for their exceptional results.

Second, few organizations have an optimal framework for designing, configuring, and aligning all the factors that influence sales performance. Sales enablement is often described as systems (the “sales stack”) and content, combined with training. But there are many other factors that influence performance, including expectations at many levels, feedback, many types of tools and resources, formal and informal consequences and incentives for doing the right thing, skills, knowledge, optimal selection and assignment, and alignment of each sales professional's values and motives with those of the company.

The Six Boxes® Model provides a comprehensive framework for sorting and aligning all the factors that influence sales performance, based on principles derived from behavior science. You need to identify how the various things you provide for the sales force function in relation to the behavior of salespeople – how they influence behavior. The Six Boxes, based on what behavior scientists call contingency analysis, gives us a way to be sure we have not missed anything, and that all the things we offer to support selling function together.

I’ve been in so many sales meetings and sales enablement gatherings where the factors that influence behavior are working at cross-purposes, or are simply missing. I recall the VP of a strategic product group at a major software company telling his people how important a specific product was for the company, while senior sales people next to me pointed out that there was nothing special in their compensation plans for this product and that it would be easier to make their numbers with the old products that they already knew.  I’ve seen marketing groups come to sales teams excited about programs they had developed, without any prior input from sales people, only to be told that the programs would be useless, and that sales people would not likely use them.  I’ve observed sales skills training focusing on identifying customer needs and addressing them with solutions, while product knowledge was taught based on features and benefits (“How cool our stuff is”).

These elements of what should be a system of behavior influences do not align, and often there are conflicts and gaps. The groups that provide elements of sales enablement are too often in silos, doing their usual things rather than aligning with the performance needs of sales reps who must achieve specific milestones. We need to patch the holes and be sure that all parts of the system line up with one another!

My recommendation, after having worked with sales organizations and reviewed a lot of sales enablement literature, is that to be successful, organizations must avoid these mistakes and approach their sales enablement efforts in a more integrated way. They need to view sales performance as a system, in which everything needs to work together to support a path from prospect to close. 

Interview and observe successful salespeople and learn from them all the small “next things” they’re trying to accomplish at each step, in each call, in each contact with their prospects and clients. Find out what milestones they target and achieve, and use those milestones to estimate how far they are from closing. Capture and refine a list of milestones – some standard and some optional, depending on circumstances – that experienced people agree are indicators of progress. Once you have that list, and know what it really means to achieve each item on it (“when you know you’ve got it”), build your sales enablement system around these milestones.

Use the Six Boxes® Model to list and sort ALL the factors in each cell of the model needed to ensure that your sales people do what it takes to achieve each milestone efficiently and effectively.  Find the gaps and disconnects, and fix them. Be sure they are all positive and easy to use, rather than punitive. Use this framework to create a continuous learning environment and culture in which sales people learn with and from one another, and sales leaders coach and support their people to achieve each milestone, large or small. Build hiring, training, sales tools and collateral, support staff work outputs, compensation, informal recognition, software, knowledge and skill development, selection and everything else to complete the Six Boxes, and be sure the pieces all fit and work together.

This is easier said than done. But until companies make the investment to accomplish these things with completeness and attention to detail, and use them as a foundation for continuous improvement, they are going to be re-inventing a wheel that is less than optimal. If you begin down the path of systemic, accomplishment-based sales enablement, the ROI will be significant, and over time you will accelerate results.

For assistance, check out our Performance Thinking® for Sales Enablement package of programs and services. And learn more from our short YouTube playlist on sales performance, or from our longer webinar.

- Carl Binder, CEO

Executable Strategic Plans?

The well-known Balanced Scorecard experts, Norton and Kaplan, have written a lot about the fact that most strategic plans are not fully executed.  Like many strategic planning experts, they focus on what it takes to execute effectively, and recommend establishing a formal process for execution, engaging leadership in the process, and even having a group or "office" devoted to execution of strategic plans. These are all key recommendations, and they align with principles one could also derive from change management, culture change, systematic performance improvement, and other disciplines devoted to moving whole organizations forward toward goals.

An issue that they, among others, do not seem to address in depth is that both the processes and the products of strategic planning vary greatly among different practitioners and organizations.  A simple Google search for descriptions or definitions of strategic planning, strategic objectives, and even strategy will reveal that people use these words differently. And they have different guidelines and criteria for what constitutes a good strategic plan.  Look up examples of strategic objectives and you will find a mix of outcomes, activities, and abstractions that may or may not be easy to verify. Often planners and those who execute plans rely on the measures chosen to accompany strategic plans for verification and monitoring of progress toward success. But in some ways, this is too late in the planning process. Can we make strategic plans themselves easier to execute?

Dr. Peter Dams, trained as a behavior scientist, has been helping organizations create and implement strategic plans for several decades. Peter has also become a Certified Performance Thinking Practitioner, and has been looking at his own work from the perspective of accomplishment-based performance improvement. Over the last several years he has refined his process, and the plans that he helps clients create, based on insights he has gained from Performance Thinking.

Much of the implementation challenge can be addressed more effectively by using the Six Boxes Model as a systemic framework to enable people in the organization to do what they must do, and to achieve what they must achieve, to implement strategic plans. But there is more to it than that.

As he and Carl Binder, CEO of The Performance Thinking Network, worked together, they were struck by the possibility that execution of strategic plans could be approached by analogy to Design for Manufacturing. In the 1970s and 1980s, manufacturers who had long struggled to optimize manufacturing cost and quality made important advances by changing the design of the things to be manufactured, an approach that has since become widespread. In recent years, Tesla, the automobile manufacturer, has made news by radically changing how cars are designed so as to make the manufacturing process simpler, with fewer separate parts and less cost for assembly and testing. Why not think of strategic planning with this in mind? Maybe it's the strategic plan itself that can be improved, not just the implementation process.

With a key insight from Performance Thinking, Peter has made a huge step forward in his work with clients. The idea is simple. Just as we can make the process of improving human performance more straightforward and leaner by insisting on definitions of performance anchored to accomplishments, or work outputs, Peter has learned that insisting on strategic objectives as accomplishments can improve execution.

As those who have learned about Performance Thinking know, we anchor our performance improvement work in accomplishments, or what we call work outputs: things that can be described as “countable nouns.” If we identify a widget, document, relationship, or decision as a thing that can be counted, and specify the characteristics that make that thing “good,” we can more easily identify the behavior or activity needed to produce it, who must be involved, and how we need to support that behavior. It turns out that if we insist that strategic objectives be defined as accomplishments, described as countable nouns with clearly agreed-upon criteria for “good,” then we can more easily develop tactical plans and milestones to achieve those objectives.

While Peter also works with clients to create plans for execution, to monitor progress, and to engage leaders in the process, it is perhaps this simple shift to accomplishment-based strategic planning that set the stage for the innovations he has developed in the last few years.

You can check it out yourself. Look for examples of strategic objectives to see how many of them are described as clear, countable "things" or accomplishments. You will probably find a mix. To use a real case example, is it easier to tell if you have been successful when a strategic objective is expressed as "Investigate whether we should build a second runway at our regional airport" or as "Decision whether to build a second runway at our regional airport"? You be the judge.

Relationships as Valuable Accomplishments

At our Summer Institute several years ago, we tried an experiment that went very well! We devoted a session to relationships as valuable accomplishments, and applied Performance Thinking. We have always listed relationships as a type of valuable accomplishment, teaching both managers/coaches and performance consultants to identify them as important work outputs when they deliver value in exceptional ways. We then apply the performance improvement logic to defining and improving them. We organized a mini workshop at the Summer Institute and had a lot of fun with it, while at the same time exploring what might otherwise be thought of as a very “soft” sort of performance.

One of our earliest examples of identifying relationships as "work outputs" was at Microsoft, years ago when we were working with their Engineering Excellence group. First level managers, who led small teams of coders, user interface designers, software testing specialists, and others, defined what a "good" relationship between team managers might be. They said that the criteria for "good" would include four things:

  • the two managers respected one another's technical competence
  • they responded in a timely fashion (by end of day or within 24 hours) to one another's communications
  • they worked toward shared goals, and
  • they were able to resolve differences quickly.

They claimed that if relationships among team managers met these criteria, then even if the people did not particularly like one another, they would be able to work together well, supporting operational efficiency, quality, and employee engagement, among other organizational results.
 
It turns out that most of us, if asked, can list the good relationships we have, both personally and professionally. And, given some prompts and a few minutes, we can typically define what makes them good. In doing so, we can appreciate the good ones, and sometimes gain insights about how to develop or improve the relationships that are not as good as they could be.
 
When we spent an hour with our colleagues at the Summer Institute engaged in this discussion, besides having lots of laughter and joking around (some participants included their personal relationships in the exercise), there were some pretty big insights. So we decided to incorporate much the same discussion into a webinar, which became one of our most popular recordings on our YouTube Channel.
 
You might enjoy the recorded webinar. And we are quite certain that many of us, and quite a few organizations, could benefit from the analysis and insights about improving relationships that came out of those discussions.

Strengthening Clinical Supervision in ABA Organizations

As organizational performance consultants, we often help companies accelerate business results and gain a competitive advantage by strengthening their leadership and management capabilities. For example, when working with organizations offering applied behavior analysis (ABA) services to individuals with autism spectrum disorder, we often focus on improving the effectiveness of clinical supervision delivered by behavior analysts and assistant behavior analysts.

We use an accomplishment-based coaching process to help clinical supervisors develop behavior technicians' performance and engagement, maximize trainees' impact, and continuously improve the quality of services delivered by those they supervise. We focus first on the valuable contributions or accomplishments they provide to the organization (e.g., accurate client records, treatment plans, program modification decisions, relationships, etc.). Next, we determine what behavior is needed to produce those accomplishments and then arrange conditions to support that behavior.

This approach, called Performance Thinking, enables supervisors to define and improve performance using simple visual models with plain-English labels. The Performance Chain, the Six Boxes® Model, and an easy-to-follow Performance Improvement Logic provide a framework for defining performance and configuring conditions to develop, improve, and support trainees' or supervisees' performance. The Six Boxes® Model encompasses all the factors or influences known from behavioral research and practical application to influence behavior.

The Behavior Analyst Certification Board (BACB®) recently sent an email reminding certificants of many helpful resources to support the supervision, assessment, training, and oversight of behavior technicians and behavior analysts in training. If you are a clinical leader responsible for defining and supporting the supervision processes in your organization, those resources can be valuable as you:

  • Ensure that clinical practice and supervision processes meet compliance requirements. 
  • Develop and refine job descriptions for supervisory roles.
  • Set expectations for supervisors and supervisees, link those expectations to organizational results and align consequences with expectations and feedback.
  • Establish performance objectives for supervisors and trainers.
  • Define the criteria for the performance of specific types of supervision (e.g., case supervision and staff supervision).
  • Describe best practices behavior for supervisors. 
  • Identify performance metrics and design systems to measure the performance of supervisors and supervisees.
  • Arrange opportunities for supervisors and supervisees to receive relevant, timely, specific feedback about their performance against expectations.
  • Design training that supports exemplary supervision practices.
  • Enable supervisors to achieve fluent skills and knowledge and use tools for planning, implementing, and documenting all types of supervision.
  • Define selection criteria for applicants for behavior technician and clinical supervisor roles.

To ensure quality supervision, clinical leaders must set clear expectations, provide meaningful feedback, support performance with accurate and easy-to-use information resources, and reward good performance. In addition, clinical supervisors must be able to communicate and collaborate with direct reports about their performance and the factors that support or obstruct it, and establish agreed-upon action steps for continuous development. In short, they must learn how to use the resources listed by the BACB® to achieve quality, compliance, and employee development goals.

Performance Thinking® models and programs offer flexible, powerful means for managing, coaching, and continuously developing employees to support high-quality service delivery, making them an excellent fit for behavior analysts responsible for directing and supervising those who deliver ABA services.

For more about accomplishment-based coaching, see our short YouTube playlist and our more extended webinar.

- Shane Isley, BCBA, Senior Consultant
