Validating Persuasive Experience (PX) Theory: Preliminary Results of a Case Study on a Corporate Wellness Program’s Web-based Learning Interfaces



The objective of this study is to gain a clearer understanding of the role Persuasive Experience (PX) may play in wellness-oriented Web-Based Learning (WBL) platforms in promoting healthier lifestyles and health behaviors through technology-based systems. Specifically, individual users’ interactions with a particular corporate wellness WBL platform were examined in relation to their State of Mind and Behavior. The results of this study may assist in better predicting behavior change and sustained adherence to increased physical activity routines, which could reduce the risk of behavior-sensitive conditions such as cardiovascular disease and its sequelae in U.S. workplaces. The findings of this case study also offer further validation of PX Theory and insight into its potential to aid in developing and implementing more effective persuasive technologies. The hope is that PX might contribute to technological systems design that can measurably and meaningfully assist employees in adopting healthier behaviors to improve their individual health outcomes. The results of this case study also suggest the potential that PX-driven design for wellness-oriented WBL systems may have for improving overall organizational wellbeing and reducing unplanned health insurance expenditures.

I’ve seen the future of healthcare technology…the robots are friendly!


If we were to believe most of what we see in the movies about humanity’s future interactions with Artificial Intelligence (AI) and robotics, we might not perceive them to be events to eagerly anticipate. “The Matrix” and “Terminators 1, 2, …n” immediately spring to mind. Thinking of examples that specifically imagine robots and AI in human health care, and without creating too many plot spoilers, having an automaton like Michael Fassbender’s android character “David” in the Alien movie prequels perform any clinical intervention on you would be…ill-advised. However, the unavoidable advance and willing acceptance of technology in our lives and our healthcare environments, in addition to an increased desire for health outcome reliability and expanded clinical capacity, suggests that having the full spectrum of technological advancements will be a necessity. Indeed, the trajectory towards this brave new world of healthcare delivery and access is already upon us. This claim is supported by the remarkable advances I saw while presenting research on the Persuasive Experience of Wellness Technologies I’ve conducted with Huiyang Li of SUNY Binghamton and Catherine Bass of Onlife Health at the first ever American Psychological Association (APA) Technology, Mind and Society (TMS) conference.

Now, before any of you become overly alarmed by my willing acceptance of this technological tsunami and start to craft a comment about the devolution of humanity in healthcare thanks to the wanton encroachment of technology, it is important to know two things about the developments I am specifically referring to:

1)      The health and wellness technologies, including AI, VR, AR, and robotics, presented within this context were part of research and development being conducted by psychologists, clinicians, systems analysts, and engineers. Their focus is on how we extend human capability through technological advancement. In other words, Human Factors in Healthcare Technology design.

2)      The driving forces of the research and innovation presented at APA TMS were firmly rooted in developing assistive technologies through actual human experience and exchange discovery within care settings. In other words, design for technologies that help to support Humanity in Healthcare, instead of those that seem to have been designed in a vacuum, without meaningful input from subject matter experts entrenched on the frontlines of health and long-term care access and delivery.

For example, the MIT Media Lab is doing some masterful work with clinical staff using “huggable robots,” guided by human operators, to lessen the stress, anxiety, and pain of children in pediatric care settings. In the United Kingdom, great strides are being made in research and development of robot use in long-term care settings to perform Comprehensive Geriatric Assessments, assist residents with retaining their cognitive capabilities, and facilitate social connectivity and personal autonomy. All the research presented indicated that most of these technologies are still very much in the “beta” stage and are meant not as a replacement for, but rather an extension of, human beings in healthcare. Thus the necessity of continuing to test them in actual care environments with clinical provider teams. The promise that this truly Human-Centered Technology holds for extending human capability in care settings is difficult to deny and not to be, at least a little bit, excited about.

Interestingly, a common anecdote seemed to manifest in many of the human-to-technology exchange case studies presented at this conference. It is also a developing theme that Huiyang, Catherine, and I have begun to notice in our own research on wellness technology usage and uptake patterns: technology sometimes does not make people feel “judged” in the way other human beings can. Although a reframing of awareness around this issue among human care providers may be helpful in certain instances, most of us who have spent any appreciable time working in and around care settings understand that 100% reliability in human response in every circumstance is simply not realistic. Furthermore, sometimes the “negative judgment” we perceive is due to our own internal sensitivities and not necessarily to anything a care provider may be communicating. Additionally, a growing body of research on assistive technology suggests that people seem to flourish with collaborative technologies that allow them to feel more autonomously capable, regardless of their age or mental or physical capacity. For those of us involved in the design of these care and wellness technologies and the environments and processes that surround them, it’s important to remember that our end goal should be not just to support but to optimize human capability.

It was no surprise that the Human Factors and Ergonomics Society was a support partner in the development of the APA TMS conference. The opportunity for human factors expertise to help successfully shape AI, VR, and robotics, as well as the environments in which they operate, is key to these innovations being implemented in a way that sets us all up for success. In many high-risk industries, having human factors engineers and/or psychologists actively helping to shape the environment, technology, and equipment design now seems a given. For instance, we are beginning to see the practice of applying human factors to design, testing, and implementation become the norm in safety-critical industries such as medical device design. However, healthcare in general, despite its direct impact on human health and wellbeing, has been relatively slow to adopt the integration of human factors into performance management as a standard practice. To fully and successfully integrate human-supportive technologies that can exponentially expand human capacity in care settings, comprehensive human factors-based strategic development and implementation will be imperative. This will include consideration of elements such as enhancing natural language processing in healthcare AI, as well as designing technology and environments of care that mitigate the interference of ambient noise in human-to-robot exchange performance.

It seems that what humanity’s future interactions with advanced autonomous technologies in healthcare may look like is contingent upon the design dependent parameters we prioritize in them. If thoughtfully designed and implemented with full consideration of human wellbeing, they could exceed our hopes for creating optimal human-centered care. If not…who knows what the outcomes may be. Perhaps, if designed and implemented the right way, these technologies could even help us to be better humans than we ever thought we could be. The promise is there, but it’s up to us to determine how we will move forward.


If you build it will they come? 3 Major Takeaways on the Necessity for Human Factors in Healthcare


I think I may be an aspiring anti-conformist. Being around innovators and individuals engaged in positive disruption of engrained and sometimes complacent systems gives me a perceptible boost of energy. Having just concluded attending and presenting research at the Human Factors and Ergonomics Society 2018 Healthcare Symposium, or “HFES HCS,” I can only describe the feeling I am leaving with as an amalgamation of optimism and resolve. Being immersed in groundbreaking research and previewing new and imminent technologies that will soon completely change the way we deliver and access healthcare is an almost overwhelming experience that’s difficult to explain in mere words. However, not codifying my takeaways from this event feels like a missed opportunity. Therefore, I will do my best to condense what I have learned and what we can all look forward to seeing in the near future of healthcare.

1.)   Participatory Design is the key to designing and integrating successful systems.

This term has been around for a long time now, and many of us in systems and environmental design have known for a while, at least anecdotally, that having users help shape product or process engineering typically elicits positive results. However, the preponderance of research that now supports this has truly enabled the concept to make the logical leap from “Plausibility” to “Belief.” Evidence now unequivocally supports that, given the increasing complexity of healthcare and the rising encroachment of technology into our lives, having cross-functional teams of those who both USE and EXPERIENCE healthcare tools, technology, and environments co-design care delivery products or processes is a necessity for outcome success. This has gone from a historical “nice to have” to a present-day “need to have.” Which brings me to my second observation…

2.)   Usable design saves lives. Period.

This is not to say that the design for your product shouldn’t contain other features, such as those that increase user efficiency, work effectivity, and comfort. However, regardless of how much sound science, statistics, and/or schmancy technology is included in your product or service design, if it is not highly usable, people will either not use it as much or eventually not use it at all. Don’t believe me? Look around you. How many of your smartphones still have keypads? How many of you still drive a car with a stick-shift? Those of you who follow me on Twitter @LSundahlPlatt know that my favorite quote from this conference was “Let the user ‘break your design,’ that just gives you the opportunity to make it better.” Which brings me to my final point…

3.)   Watch what people do more than simply relying on what they tell you.

Anytime I go to the dentist he asks me if I am flossing every day, and I always say, “YES.” The fact of the matter is that although I do try to floss regularly, my routine for doing so is contingent upon whether I have time, when I remember, and when I have something stuck in my teeth. In this particular example, he knows I’m not being truthful, and I know I’m just flat out lying. Nevertheless, we still do this dance and, hopefully, no lives will be lost because of my vigilance decrement for flossing. Healthcare delivery and access is a different story. Interviewing clinical subject matter experts, patients, and families, although helpful, is not enough to really understand how to analyze and design human-centered tools. A great case in point asserted in one of the presentations at HFES HCS is that a great deal of critical patient support equipment is designed for completely immobile patients. The fact is that patients are rarely completely immobile, and when patients are in the recovery stage, increasing their capable mobility may often speed healing. Basically, for effectively analyzing and designing for Human Factors, observation and simulation are essential for creating user-centered tools that assist in care optimization.

Although this is far from a comprehensive list, hopefully it gives you a bit of a snapshot of the latest and truly greatest trends in Human-Centered Healthcare. I think we can all sense we are very close to a tipping point. The days of restrictive and prescriptive pathways for delivering and accessing care are behind us. Clinical care professionals are rightfully demanding, and often creating, care delivery tools and technologies that are more usable, more effective, and have the potential to lessen rather than add to clinician burn-out. Patients and family members are gravitating towards care access mechanisms that are easy to use, efficient, and can be better customized to fit their individual care needs. Those of us in healthcare research, design, and development should be advised: either decide you are going to ride this human-centered healthcare wave or duck and cover. Personally speaking…I am going to delightedly grab a big surfboard! So, when it comes to the new healthcare paradigm of process, product, and place design, “If you build it will they come?” Perhaps initially, but not for long if you are not integrating Human Factors into your design and development process.



Lisa Sundahl Platt is the CEO and Founder of UMNSystems LLC.  She writes about the systems and science of organizational and cultural resilience and how it impacts the human experience.

A Human-centered Approach to Achieving Resilience this Year

Happy 2018! Like many of you, I take the occasion of the passage of another year to think about ways to make more consistently successful lifestyle choices. Because I am also something of an obsessive-compulsive nerd who feels the need to inject the scientific method into almost everything I do, year after year I try to take a methodological and evidence-informed approach to designing my decision process for behavioral response development.

For instance, using a merit-based review of some recent wellness improvement activity suggestions à la various online information sources, I have decided to vote YES on implementing into my own process of daily living:

  • Incrementally waking up earlier each morning to ensure I fit exercise into my schedule

and NO on including:

  • “Detoxing” my physiological system using a coffee enema

The second proposition actually gets a “HARD PASS” due to its dubious footing in anything that resembles viable wellness research, and because the traditional drinking method for enjoying my double espresso Americano on the way to a morning meeting is much easier to do while driving.

The real test of success that most of us face is not in making the decisions meant to change our behavioral response systems. The challenge, as the minutes, hours, days, and weeks of the coming year pass by, is in keeping our well-intended responses steadfast and “resilient,” especially in the wake of unexpected life events. Impelled by this problem, this year I started wondering if it might be possible to design algorithms for better maintaining various types of systems response resilience. (I know…who really thinks about this stuff other than me? But again, I am both admittedly systems-science-obsessed and between semesters at the moment, so indulge me.)

So, what exactly is “RESILIENCE” and why is it important? The WHY in this case is likely easier for us to wrap our minds around than WHAT “resilience” is meant to convey. You and I know that to be resilient in the face of change or unexpected events is a positive. The ability to respond in a manner that is both anticipatory and adaptive is a good thing, whether we are talking about an individual human being or a collective system of some sort. However, the meaning of response resilience is where things may become a bit unclear. Resilience is one of those expressions that, due to its context of use and somewhat ambiguous connotation, is often difficult to define in a fixed manner. This is because resilience is a construct that exists in two distinct knowledge camps: Human Psychology and Systems Engineering.


When viewed through the lens of Psychology, resilience embodies those behavioral qualities and attributes of an individual that enable them to thrive in the face of adversity or unanticipated circumstances (Connor & Davidson, 2003). Resilience in Engineering, on the other hand, describes how elements of a system can sustain operations under expected and unexpected events in an efficient manner and evolve to an improved state of function (Hollnagel, 2013). These definitions may at first seem somewhat dichotomous in meaning. However, a key characteristic in both descriptions mandates that entities, whether human or built systems, must respond in an adaptive manner to unforeseen or surprising events to be considered “resilient.” This allows us a window of opportunity for unifying the psychological and engineering intent of this response descriptor. To be resilient, human, technological, or operational activity systems need to be functionally adaptive. For these systems to be adaptive, they typically need a reliable process algorithm and support context. This procedural framework includes the generalizable achievement drivers of goals, incentives and/or inspiration, a launch point, plans, supportive infrastructure and resources, and finally a viable process for attainment. These inciting elements present a socio-technical interaction environment where the construct of Human Factors proves to be a useful context for establishing and measuring goodness of fit for systems’ response resilience.

Human Factors, like the concept of resilience, is a construct whose study and methods of analysis also reside in the fields of Psychology and Engineering. Human Factors is essentially the scientific discipline of evaluating how individual people and human activity systems respond to external stimuli in various environments and under different situations (Hendrick & Kleiner, 2001). Why is this approach so valuable for designing systems’ resilience in general? Because for a system to be resilient, it must include some form of sentience, i.e., intelligent input and anticipatory response (Hollnagel, 2013). Although Artificial Intelligence continues to gain ground in achieving independent and self-mobilizing cognition, system sentience to date can only be achieved through human interaction with systems. Because we humans are at times prone to error and response inconsistency, especially during unpredicted or stressful events, systems need several layers of corrective or fallback provisions to ensure reliable response success. Because systems planned using Human Factors are person-centered rather than process-driven, they are designed to provide remedial support to human operators. “A resilient system includes not just well-tuned humans but also systems that complement the ability of the human to be resilient” (Boring, 2009). This is a bit different from some other Quality/Behavior Management tactics whose focus is on compelling human behavior to adapt to consistent procedures, always and under any circumstance. As any of us who have begun a strict diet as part of a New Year’s resolution can attest, that is an approach that often has a variable success rate and a fairly short “shelf-life.”
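To make the idea of layered corrective provisions a bit more concrete, here is a minimal, purely illustrative sketch (not from my research, and with entirely hypothetical function names): a system tries its primary response first and, rather than assuming the human or the automation performs perfectly every time, falls back through successive safeguards.

```python
def respond_with_fallbacks(event, layers):
    """Try each response layer in order; return the first that succeeds."""
    for layer in layers:
        try:
            return layer(event)
        except Exception:
            continue  # this layer failed; fall back to the next safeguard
    return "escalate to human review"  # the final corrective provision


# Hypothetical layers: a primary automated response, then a simpler backup.
def automated_response(event):
    if event == "unexpected":
        raise ValueError("automation cannot handle surprise events")
    return f"handled '{event}' automatically"


def checklist_response(event):
    return f"handled '{event}' via standard checklist"


layers = [automated_response, checklist_response]
print(respond_with_fallbacks("routine", layers))
print(respond_with_fallbacks("unexpected", layers))
```

The design point is simply that resilience lives in the chain, not in any single layer: the surprise event is still handled, just by a different provision.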


A Human Factors-based algorithm for sustaining systems response resilience would need to include key response intelligence supporting factors of:  

  • Motive as a kickstart variable and the primary navigator of resilience strategy and response.
  • Ingenuity, which is essentially the applied combination of incentive and innovation to guide design of decision parameters.

and finally

  • Support elements that not only fuel resilience achievement but provide corrective contingencies for fallible human response.

These elements supporting human sentience would also need to be both guided and driven by the decision points of:

  • Action: i.e. response
  • Strategy: i.e. plans
  • Systems: i.e. power

At first glance, this algorithmic approach may appear overly complex for guiding seemingly basic behavior change. However, when viewing the process holistically, we can see that it is relatively straightforward. This evidence-based method essentially provides us both a systems response component checklist and a guideline for benchmarking our decision effectiveness progress. Moreover, its generalizability allows us to apply it as a template to plan both individual and collective systems for discrete or enterprise-wide response strategy. So, although this method may seem overly complicated for devising plans to get up at 6:00 a.m. rather than 6:30 a.m. to fit in 30 minutes on the elliptical, viewing it as an approach to fundamentally change your overall wellness behavior resilience likely has more merit. Not to mention, the latter, more comprehensive intent likely has a more profound and sustainable impact on actual and measurable benefit outcomes.
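For the systems-minded, the checklist above can be sketched as a simple data structure. This is a hypothetical illustration, not a validated tool: the field names mirror the framework's terms, and the "readiness" score is just one made-up way to benchmark how complete a plan is.

```python
from dataclasses import dataclass, field


@dataclass
class ResiliencePlan:
    # Supporting factors of response intelligence
    motive: str                                    # kickstart variable, primary navigator
    ingenuity: str                                 # incentive + innovation guiding decisions
    supports: list = field(default_factory=list)   # fuel + corrective contingencies
    # Decision points
    action: str = ""                               # response
    strategy: str = ""                             # plans
    systems: str = ""                              # power

    def readiness(self):
        """Fraction of checklist elements filled in -- a crude benchmark."""
        items = [self.motive, self.ingenuity, self.supports,
                 self.action, self.strategy, self.systems]
        return sum(bool(i) for i in items) / len(items)


plan = ResiliencePlan(
    motive="improve overall wellness behavior resilience",
    ingenuity="wake 30 minutes earlier to fit in the elliptical",
    supports=["alarm across the room", "backup lunchtime walk"],
    action="exercise daily",
    strategy="incremental wake-up times",
    systems="household routine that protects the morning slot",
)
print(f"plan readiness: {plan.readiness():.0%}")
```

Used this way, an empty field is immediately visible as a missing resilience component rather than a vague sense that a resolution "isn't sticking."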

Whether we like it or not, considering the increasing change in the socio-technical systems in which we find ourselves operating, resilient anticipatory feedback and response is a new reality. There is an escalating need for more consistent and dependable action mechanisms within human-guided interaction environments and activity systems. These mechanisms should ideally be comprised of reliable components and straightforward processes. A Human Factors approach to resilient systems design will allow us as human beings to focus on what we are inherently good at: creativity, communication, compassion, and, when the occasion calls for it, drinking our coffee.



Boring, R. L. (2009, October). Reconciling resilience with reliability: The complementary nature of resilience engineering and human reliability analysis. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 53, No. 20, pp. 1589-1593). Los Angeles, CA: Sage Publications.

Connor, K. M., & Davidson, J. R. (2003). Development of a new resilience scale: The Connor-Davidson Resilience Scale (CD-RISC). Depression and Anxiety, 18(2), 76-82.

Hendrick, H., & Kleiner, B. M. (2001). Macroergonomics: An introduction to work system design. Santa Monica, CA: Human Factors and Ergonomics Society.

Hollnagel, E. (2013). Resilience engineering and the built environment. Building Research & Information, 1-8.

Hollnagel, E., Braithwaite, J., & Wears, R. L. (Eds.). (2013). Resilient health care. Ashgate Publishing, Ltd.


Could Biomimetic Engineering for Healthcare Surfaces Reduce the Potential for Dangerous Pathogen Transfer?

UMNSystems and Sappi North America's conference paper for the 2017 8th International Conference on Applied Human Factors and Ergonomics (AHFE) is now available from Springer. The focus of this work is to present preliminary data on how Human-Centered Design and strategic Resilient Surface Technologies could potentially play a role in mitigating dangerous disease-causing pathogen transfer from healthcare surfaces to caregivers, families, and patients.

The authors of this paper will be presenting this information at the 2017 AHFE Conference in Los Angeles this July 17-21. This article represents work that is part of a recently mobilized, long-term, team-based research effort with select Health Systems and Healthcare Evidence-based Architecture & Design research partners. Its purpose is to assess the ongoing performance impact of patented biomimetic surface-patterning in healthcare environments.

New Study Published in Journal of Interior Design on the Human Value of Evidence-Based Mental Health Design


A new peer-reviewed study by Dr. Sheila Bosch and Dr. Daejin Kim of the University of Florida and Lisa Sundahl Platt of UMNSystems LLC has been published in the latest issue of the Journal of Interior Design. This research is a compilation of several VA-based Mental Health design quality improvement case studies, cross-referenced with principles of Human Factors and Ergonomics, and a comprehensive research and literature review that highlights the value of identifying environmental factors that may influence Veteran and staff behavior and response within Mental Health Environments.

There is a great need for more study on how the environments that surround us impact our mental health and well-being. The authors of this study hope that this effort may elicit greater focus on this needed area of research.

The study may be accessed from the following online link:

How TEDMED 2016 and a "Metaphoric Murmuration" Helped Reset My Vision of the Future of Healthcare


In a year that has truly been abundant with personal, life-changing events, one of my most memorable is being accepted as a Frontline Scholar to TEDMED. I have been an ardent fan of TED and TEDMED for their ability to be vehicles for spreading innovative thought ever since I became aware of them. I was incredibly honored to be accepted to their 2016 Frontline Scholars group and was secretly hoping that my experience this past November 30-December 2 would be the “shot in the arm” I needed to bolster my hope in the future of healthcare. I must say that my experience was not what I expected. It was better! In a professional career that has now spanned over a quarter of a century, I have attended a lot of conferences as both a speaker and an attendee. TEDMED was distinctly different. Let me tell you why.

The Power of Passionate Curiosity:      

First, I was struck by the immediate realization that, regardless of their role in the conference, everyone seemed willing to share their knowledge and stories, and there were multiple platforms for doing so designed into the experience. This, of course, occurred in formal TEDMED talks, but perhaps more importantly, a great deal occurred in ad-hoc, incidental but meaningful conversations that, in my experience, are not at all common at typical industry conferences. For example, the first night I was there, I had a conversation about communicating the relevance and myriad health and community benefits of housing security with one of the keynote speakers I happened to sit next to during the first presentations. Lloyd Pendleton, a former Ford Motor Company executive, is successfully working to eradicate homelessness in Utah, and he generously shared some of his personal insights with me regarding demonstrating a return on investment for the "right thing to do." I also met Paul Lindberg at a TEDMED “Hive Discussion,” a health specialist for the Columbia Gorge region whose program won the Robert Wood Johnson Foundation’s 2016 Culture of Health Prize. He has since shared some anecdotes related to meaningful improvements their work has elicited in their part of the country, which is relevant to my own doctoral research on the intersection of sustainable community infrastructure and population health. In fact, there seemed to be a genuine, mutual, and universal curiosity on the part of attendees, speakers, volunteers, and scholars to discover and learn about the spaces one another were working in and what drives our interest and passion in changing health and wellness in our communities.
Some of us discussed ways we can join forces in future efforts to create greater impact, but often conversations occurred because of genuine interest and a desire to learn about innovation and ideas that were outside the realm of our own personal experience.

The Driving Force of Divergent Perspectives:

The first thing I noticed about my fellow Frontline Scholars was how truly diverse our backgrounds and experiences were. In our cadre, each of our individual missions, all related to driving health delivery and wellness reliability towards a more human-centered focus, was fueled by distinctly different motivators. Our group included members like Lydia Green (@RxBalance), a pharmacist who is actively working as a medical writer to democratize health and wellness data and streamline healthcare communications; Liz Salmi (@TheLizArmy), a patient and punk rock drummer who is using her own experiences battling Brain Cancer to demystify and improve the healthcare experience; and finally, the amazing Jessica Willett (@jkwillettmd), an ER doctor and volunteer physician for Flying Doctors of America. That’s right folks, you know those heroic men and women you see on the news working to provide essential emergency medical care to people hit by natural disasters or displaced by military conflicts? Jessica is one of them. This is just a small sample of the remarkable people I had the privilege to meet in this incredible group of diverse and committed change-makers. It is fair to say I am pretty much in awe of the work all of my fellow Frontline Scholars are engaged in, and I have stayed in touch with many of them. Once you gain access to this level of innovator and agent of positive disruption of the status quo, you want to stay up to date on what they are working on, if only to expand your own personal inspiration repository.

A Murmuration of Motivated Minds:      

Being a lifelong student of the Behavior of Systems, I am constantly seeking out patterns in system component actions that can cause noticeable reactions and results. I have always thought it was a little bit “magic” how even small changes in the behaviors of the parts of a system could significantly change its trajectory and, consequently, its outcomes. “Human Systems,” such as those that drive the health and well-being of local and global communities, essentially share this same characteristic. As a testament to how this behavior looks in biological systems, I offer the following video, shared in one of the breaks during TEDMED, titled “A Bird Ballet."

This amazing synchrony of independent organisms is referred to as a “Murmuration.” You will notice how a smaller part of the flock, or system of starlings in flight, will at times break off into an open part of the sky, but then come back to the others to alter the direction and change the shape of the whole. After a bit of investigative research into murmurations, I was struck by what we know about the science behind this bird behavior and how it shapes the design of their flight patterns. In my opinion, this responsive fluidity in system change could be especially relevant to driving change in human-powered systems. This was my last and major takeaway from TEDMED, and this phenomenon serves as an appropriate symbol to sum up my experience. That is, even when individuals are working in different areas of interest, if these separate efforts are linked by a unified shared vision, and we can connect and share information from multiple perspectives, amazing things can occur. You only need to watch the video to see how the science behind signals of shared information can meaningfully change a system’s trajectory.

A key point: those willing to "break away from the flock" to gather or share information are the ones to watch. If you want to see the brighter and bolder future of human-centered healthcare, pay attention to these people. They are the ones who, because of their divergent and unconventional ideas, passionate and compassionate curiosity, and willingness to be the first to venture away from the flock, I believe can finally move health and wellness systems toward true resilience. Meeting some of them at TEDMED has reaffirmed my hope in the future of human health and well-being and emboldened me to strive harder toward re-engineering our current system. To reference an overused, but in this context apt, quote of Margaret Mead: "Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has."



Lisa Sundahl Platt is the CEO and Founder of UMNSystems LLC.  She writes about the systems and science of organizational and cultural resilience and how it impacts the human experience.


Establishing Trust through Traceability

I think that nowadays, any time any of us is feeling especially masochistic, we know we can meet our daily exasperation quota just by turning on the news. Full disclosure: this is in no way meant to be a diatribe on the media and its communication of information about world events and the US-based political circus that continues to unfold before our weary eyes. Rather, it's an observation of an emerging theme that continues to grow within our collective consciousness: a pervasive and persistent lack of trust in the validity and transparency of our information sources.

As a scientist, I have been trained to maintain a healthy degree of skepticism. The education we receive in the sciences, applying research and analytic methods to assess cause and effect, drills the concept that "no theory can ever be proven in absolute terms" into us at early and iterative stages of our training. Similarly, as designers we are conditioned to perpetually seek a "goodness of fit" for any temporary solution that may provide a moderating fix to a constantly evolving problem. At first glance, these two disciplines may not seem eminently relatable. As someone who practices both, though, I can attest that they are. In fact, one of the key elements that unites the two fields is confidence in the veracity and completeness of the data guiding one's decision making and read of the circumstances surrounding a given problem or opportunity. In scientific analysis, this reliance on data to produce predictable results is expressed quantitatively through the "confidence interval," a range that captures how much the researcher trusts that estimates drawn from a sample support valid inferences about the whole. However, in applying a scientific approach to designing solutions to problems in their actual settings, I would recommend a slightly different approach to augmenting trust in information. One that doesn't require pontificating about "p values" and is far more straightforward. Traceability.
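To make the confidence interval idea concrete, here is a minimal sketch in Python using invented sample data (the activity figures below are purely hypothetical, not drawn from any study mentioned here). It computes a 95% interval for a sample mean using the normal approximation:

```python
import statistics

# Hypothetical sample: minutes of daily physical activity for ten people
sample = [32, 41, 37, 29, 45, 38, 33, 40, 36, 39]

n = len(sample)
mean = statistics.mean(sample)              # sample mean
sem = statistics.stdev(sample) / n ** 0.5   # standard error of the mean

# 95% confidence interval via the normal approximation (z = 1.96);
# a t-multiplier would be slightly wider and more exact for small samples
z = 1.96
ci_low, ci_high = mean - z * sem, mean + z * sem
print(f"mean = {mean:.1f}, 95% CI = ({ci_low:.1f}, {ci_high:.1f})")
```

Read loosely, the narrower the interval, the more the researcher can trust that the sample mean reflects the population it was drawn from.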

Traceability tools in Systems Engineering provide a complete macro and micro operational assessment that lends itself to the development of subsystem, component, and system support infrastructure matrices, allowing for traceability from high-level to low-level design requirements and vice versa (Blanchard & Fabrycky, 2011). More simply put, these tools give all stakeholders on an improvement or design team a simple graphic interface for interpreting the connections and relationships between goals and objectives and the variables influencing them. They also give the system engineer or change management specialist a viable set of quantifiable terms to incorporate into system design parameters and lifecycle performance analysis. Most importantly, they provide a framework for presenting comprehensive, easy-to-digest information related to problem resolution or opportunity improvement. An inclusive picture of system factors and their behaviors that everyone can understand and discuss builds stakeholder trust, largely because everyone involved in the effort has access to the same complete and comprehensible data. Everyone trusts the information informing the design of a system to achieve a specific vision because, metaphorically speaking, everyone is reading off the same sheet of music.

One of my favorite traceability tools is what I refer to as a "What/Why" matrix. This is a simple-to-use and extremely portable tool that can be used by essentially any group of individuals, regardless of background or training, who wish to collectively realize a specific objective. In using this tool, a team leads with the "WHYs" they think a specific goal needs to be accomplished and then ranks these in terms of importance on a scale of 1-10 (1 being least important, 10 being most important).

We start with the "WHYs" in this approach because it automatically introduces "purpose," as opposed to merely "process," into achieving a goal. Evidence suggests that most individuals are more likely to commit to accomplishing goals they feel are personally meaningful and have the potential to directly improve their own circumstances (Latham & Locke, 2007). Leading with your "WHYs" can also be a critical asset in building the team tenacity needed to get through "the ugly middle" of many projects.

Next, the team lists the “WHATs” in the system development process they think will help them accomplish their objective and then ranks these factors in terms of importance.

Finally, the group links which “WHAT” factors they feel will be most instrumental to achieving specific “WHY” factors.

The team then multiplies each WHAT weight by the weights of the WHYs it is linked to, summing across the links, to produce a prioritized ranking of the system's design criteria or process steps.
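The arithmetic of the steps above can be sketched in a few lines of code. The goals, weights, and links below are invented purely for illustration; in practice they would come from the team's own rankings, and a graded link scale (for example 1/3/9) works just as well as the binary one shown here:

```python
# Hypothetical What/Why matrix: WHY and WHAT importance ranked 1-10,
# links marking which WHATs the team connected to which WHYs.
whys = {"reduce staff burnout": 9, "improve patient safety": 10}
whats = {"redesign shift handoff": 7, "new scheduling software": 5}

links = {  # (WHAT, WHY) pairs the team linked; unlinked pairs count as 0
    ("redesign shift handoff", "improve patient safety"): 1,
    ("redesign shift handoff", "reduce staff burnout"): 1,
    ("new scheduling software", "reduce staff burnout"): 1,
}

def priority(what: str) -> int:
    """Multiply the WHAT weight by each linked WHY weight and sum."""
    return sum(
        whats[what] * why_weight * links.get((what, why), 0)
        for why, why_weight in whys.items()
    )

# Rank the WHATs by their combined purpose-weighted scores
ranked = sorted(whats, key=priority, reverse=True)
for w in ranked:
    print(w, priority(w))
```

In this toy example the handoff redesign outranks the software purchase because it supports both WHYs, which is exactly the kind of shift in priority the matrix is meant to surface.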

This traceability tool demonstrates how a process step that at project inception is perceived as an inconsequential "WHAT" can quickly rise to the top in importance because of its potential to support the purpose and meaning behind goal achievement. The method, which ties objective achievement to the values group members hold personally, is a great way of building team trust. Stakeholders trust that in pursuing a defined goal they are also fulfilling their own hopes and dreams, because they can actually see how the process-based "WHATs" and the purpose-based "WHYs" are linked and consequently prioritized. The framework also provides the system engineer or change management specialist some critical metrics for informing system design development and performance benchmarking. This approach can be a great first step in integrating co-design into performance-driven system development and in demystifying data collection and interpretation for design and improvement teams.

In future posts, we will discuss how traceability tools and co-design can improve the efficacy of emergent system processes and the resilience of system infrastructure, both critical factors in achieving solutions to "Wicked Problems."


  1. Blanchard, B., & Fabrycky, W. J. (2011). Systems engineering and analysis (5th ed.). Boston: Prentice Hall.
  2. Latham, G. P., & Locke, E. A. (2007). New developments in and directions for goal-setting research. European Psychologist, 12(4), 290-300.



Strategies for Synchronizing Social Systems and Safe Behaviors

Alliteration aside, the concept of "Safety" has been an overriding theme of the Summer of 2016. It seems as though most newsworthy topics over the past several months have sparked conversations about how we can better preserve and promote personal and public safety. The industry in which I have spent the better part of two decades working has been no exception to this trend. In May of this year, the medical journal BMJ published a Johns Hopkins study indicating that 250,000 deaths per year in America are due to medical error, making medical error the third leading cause of mortality in the US (Makary & Daniel, 2016). Although a sobering figure, evaluating this type of data and its variables is critical to better understanding potential safety gaps and designing systems that mitigate the poor outcomes those gaps cause.

Over the years I have worked with many organizations throughout the US and abroad, helping them develop and implement performance improvement infrastructures to improve their safety-related process outcomes. In doing so, I have often encountered two corrective-action responses from organizational leaders who want to impact their safety-related quality data; I call the first approach "Dystopian" and the other "Denial." The Dystopian Safety Improvement method is frequently exemplified by a draconian lockdown on absolutely anything that could introduce risk, regardless of how remote the possibility. This can create service experiences for both staff and clientele that feel more Orwellian and punitive than secure and inviting. Ironically, this approach can also lead to employee burnout, which contributes to diminishing returns in safety measure effectiveness (Leiter, 1997). The Denial approach, in the context I am defining it, does not necessarily ignore safety data, but attempts to address it by layering other positive components into service delivery mechanisms. Although these amendments can be tangential to improving a product and/or service, they are typically unrelated to actual gap or problem resolution. They instead serve as convenient distractions and a sort of "Apart from that, Mrs. Lincoln, how did you enjoy the play?" approach to performance improvement. Luckily, this is not the only leadership response I have witnessed when safety improvement outcomes become salient in an organization.
Proactive leaders, like any good student of risk probability, understand that opportunities for safety-related performance improvement are an inevitable and continuous part of managing complex, dynamic systems. Healthcare organizations are particularly susceptible because of the types of services they provide and the multivariate, mutable nature of the environments in which they operate. One methodology proving especially useful in parsing both safety-related causes and outcomes and in offering viable solutions for error mitigation is Cognitive Systems Engineering.

Cognitive Systems Engineering, which came into being in the late 1960s, was born out of the idea that increasing automation alone was insufficient to ensure reliable safety in any complex process (Flach, 2015). It puts forth the notion that physical (environment/equipment) and logical (software/IT) systems must incorporate human-centered control design that facilitates performance goal-oriented behavior and reduces performance risk-oriented behavior in the human beings interacting with them. Furthermore, its analytic tools introduce easy-to-understand, visual, bi-directional traceability between the "Why," "What," and "How" of safety-outcome causes and effects (Lee et al., 2010). To summarize, it offers an approach to creating robust, evidence-based architectures that can reliably support safe outcomes within systems that are both flexible to changing circumstances and not soul-crushing to the people using them to deliver or receive services.

I will be providing some examples of these tools and their usage in future posts along with evidence of how they can be absolute game changers in increasing safety and performance reliability in sustainable ways.   


  1. Flach, J. (2015). Supporting productive thinking: The semiotic context for Cognitive Systems Engineering (CSE). Applied Ergonomics.
  2. Lee, Katta, Jee, & Raspotnig (2010). Means-ends and whole-part traceability analysis of safety requirements. The Journal of Systems & Software, 83(9), 1612-1621.
  3. Leiter, M. P., & Robichaud, L. (1997). Relationships of occupational hazards with burnout: An assessment of measures and models. Journal of Occupational Health Psychology, 2(1), 35-44.
  4. Makary, M. A., & Daniel, M. (2016). Medical error: The third leading cause of death in the US. BMJ, 353, i2139.

