Week 03: Standardization (Part I)

· Week 03 Lecture Video – Healthcare Standardization (URL)

ANSI – American National Standards Institute. (2017, October). Dr. John Halamka, Harvard Medical School, on Healthcare Standardization [Video]. YouTube. https://www.youtube.com/watch?v=KYLeZJcf9jc

Week 03 Discussion (200 words)

Question(s) – answer both questions:

· Where do you see the balance between standardized EHR systems and customized EHR systems? What are the advantages of customization? What are the disadvantages?

· If you were working to create a policy on standardized terminology and data collection methods for your organization, what would you be sure to include?

Week 03 Assignment – TIGER VLE Module completion

Complete your TIGER VLE Module: Systems Analysis, Planning, and Design

Upload a picture of your “Module Completion / Quiz Summary” into the dropbox below. You can use the print function to save it as a PDF, or take a screenshot and save it as a picture.

Week 03 Assignment – Health IT Evaluation – Part I

For this week’s assignment:

· Complete sections I-V of the Health IT Evaluation Toolkit. Use the toolkit as a guide to write a Word document that includes the sections for this assignment. For section II of the toolkit, be sure to select two or more important stakeholders and discuss their goals and vision for the success of this proposed project implementation. APA formatting (with the exception of an abstract) is required for this submission.

· Submit your documents below.

For complete information, refer to your Health IT Evaluation Project, described below.



Health IT Evaluation Project

Health information technology (IT) is used largely to improve care, healthcare quality, and patient safety. Implementing health IT systems in healthcare organizations is an expensive, complex endeavor. Oftentimes, large project implementations, while intended to increase efficiency, ultimately do not live up to their potential. The causes are multifaceted, so early stages of evaluation are necessary to avoid costly and dangerous technology failures (Cusack et al., 2009).

The purpose of this project is for you to perform an evaluation of a proposed health IT system in your place of employment, using components of the Health Information Technology Evaluation Toolkit published by the Agency for Healthcare Research and Quality (AHRQ). If your organization is not considering implementing any new technology in the near future, you may elect to evaluate the process that was completed for a health IT system already in existence.

The Health IT Evaluation Toolkit by Cusack et al. is also listed on the Required Reading page as a resource. Use the toolkit as a guide to write APA-formatted Word documents that include the sections for the different assignments.

Week 3: Health IT Evaluation Toolkit Assignment (Part 1)

For this week’s assignment, complete sections I-V of the Health IT Evaluation Toolkit. APA formatting (with the exception of an abstract) is required for this submission. For section II of the toolkit, be sure to select two or more important stakeholders and discuss their goals and vision for success of this proposed project implementation.

Week 5: Health IT Evaluation Toolkit Assignment (Part 2)

For this week’s assignment, complete sections VI – X of the Health IT Evaluation Toolkit. APA formatting (with the exception of an abstract) is required for this submission.

Week 7: Health IT Evaluation Toolkit Assignment (Part 3)

For this week’s assignment, complete sections XI – XV of the Health IT Evaluation Toolkit. APA formatting (with the exception of an abstract) is required for this submission.

Week 9: Health IT Evaluation Toolkit Assignment (Part 4)

For this week’s assignment, complete sections XVI – XVII of the Health IT Evaluation Toolkit. APA formatting (with the exception of an abstract) is required for this submission.

Week 11: Final System Evaluation Project paper

For this week’s assignment, complete section XVIII of the Health IT Evaluation Toolkit. APA formatting (including an abstract) is required for this submission.

In earlier weeks, your assignment deliverables were formative components of the toolkit. This week, you will turn in your completed project: the full evaluation plan.

Your final submission, the evaluation plan, should be formatted as follows:

1. Short description of the project

2. Goals of the project

3. Questions to be answered by the evaluation effort

4. First measure to be evaluated – quantitative

a. Overview – general considerations

b. Timeframe

c. Study design/comparison group

d. Data collection plan

e. Analysis plan

f. Power/sample size calculations

5. Second measure to be evaluated – qualitative

a. Overview – general considerations

b. Timeframe

c. Study design

d. Data collection plan

e. Analysis plan

6. Subsequent measures to be evaluated in the same format

Health Information Technology Evaluation Toolkit

2009 Update

Prepared for:
Agency for Healthcare Research and Quality
U.S. Department of Health and Human Services
540 Gaither Road
Rockville, MD 20850
www.ahrq.gov

Contract No. 290-04-0016

Prepared by:
Caitlin M. Cusack, M.D., M.P.H., NORC of the University of Chicago
Colene M. Byrne, Ph.D., Center for IT Leadership, Partners HealthCare System
Julie M. Hook, M.A., M.P.H., John Snow, Inc.
Julie McGowan, Ph.D., F.A.C.M.I., Indiana University School of Medicine
Eric Poon, M.D., M.P.H., Division of General Medicine and Primary Care,
Brigham and Women’s Hospital
Atif Zafar, M.D., Regenstrief Institute Inc.

AHRQ Publication No. 09-0083-EF
June 2009


This document is in the public domain and may be used and reprinted without permission except
those copyrighted materials that are clearly noted in the document. Further reproduction of those
copyrighted materials is prohibited without the specific permission of copyright holders.

Suggested Citation:
Cusack CM, Byrne C, Hook JM, McGowan J, Poon EG, Zafar A. Health Information
Technology Evaluation Toolkit: 2009 Update (Prepared for the AHRQ National Resource Center
for Health Information Technology under Contract No. 290-04-0016.) AHRQ Publication No.
09-0083-EF. Rockville, MD: Agency for Healthcare Research and Quality. June 2009.

Acknowledgments
The authors would like to thank numerous members of the AHRQ National Resource Center’s
Value and Evaluation Team for their invaluable input and feedback: Davis Bu, M.D., M.A.
(Center for IT Leadership); Karen Cheung, M.P.H. (National Opinion Resource Center); Dan
Gaylin, M.P.A. (National Opinion Resource Center); Julie McGowan, Ph.D. (Indiana University
School of Medicine); Adil Moiduddin, M.P.P. (National Opinion Resource Center); Anita
Samarth (eHealth Initiative); Jan Walker, R.N., M.B.A. (Center for IT Leadership); and Atif
Zafar, M.D. (Indiana University School of Medicine). Thank you also to Mary Darby, Burness
Communications, for editorial review.

The authors of this report are responsible for its content. Statements in the report should not be
construed as endorsement by the Agency for Healthcare Research and Quality or the U.S.
Department of Health and Human Services.


Contents

Introduction
Section I: Developing an Evaluation Plan
  I. Develop Brief Project Description
  II. Determine Project Goals
  III. Set Evaluation Goals
  IV. Choose Evaluation Measures
  V. Consider Both Quantitative and Qualitative Measures
  VI. Consider Ongoing Evaluation of Barriers, Facilitators, and Lessons Learned
  VII. Search for Other Easily Accessible Measures
  VIII. Consider Project Impacts on Potential Measures
  IX. Rate Your Chosen Measures in Order of Importance to Your Stakeholders
  X. Determine Which Measurements Are Feasible
  XI. Determine Your Sample Size
  XII. Rank Your Choices on Both Importance And Feasibility
  XIII. Choose the Measures You Want To Evaluate
  XIV. Determine Your Study Design
  XV. Consider the Impact of Study Design on Relative Cost And Feasibility
  XVI. Choose Your Final Measures
  XVII. Draft Your Plan Around Each Measure
  XVIII. Write Your Evaluation Plan
Section II: Examples of Measures That May Be Used to Evaluate Your Project
Section III: Examples of Projects

Appendixes

Appendix A: Sample Size Example
Appendix B: Health IT Evaluation Resources
Appendix C: Statistics Resources


Introduction

We are pleased to present this updated version of the Agency for Healthcare Research and
Quality (AHRQ) National Resource Center for Health Information Technology (NRC)
Evaluation Toolkit. This toolkit provides step-by-step guidance for project teams who are
developing evaluation plans for their health information technology (health IT) projects.

You might ask: “Why evaluate?” For years, health IT has been implemented with the goals
of improving clinical care processes, health care quality, and patient safety, without questioning
the evidence base behind the true impact of these systems. In short, these systems were
implemented because they were viewed as the right thing to do. In the early days of health IT
implementation, evaluations took a back seat to project work and frequently were not performed
at all, at a tremendous loss to the health IT field. Imagine how much easier it would be for you
to implement your project if you had solid cost and impact data at your fingertips.

Health IT projects require large investments, and, increasingly, stakeholders are demanding
information about both the actual and future value of these projects. As a result, we as a field are
moving away from talking about theoretical value, to a place where we measure real value. We
have reached a point where isolated studies and anecdotal evidence are not enough – not for our
stakeholders, nor for the health care community at large. Evaluations must be viewed as an
integral piece of every project, not as an afterthought.

It is difficult to predict a project’s impact, or even to determine impact once a project is
completed. Evaluations allow us to analyze our predictions about our projects and to understand
what has worked and what has not. Lessons learned from evaluations help everyone involved in
health IT implementation and adoption improve upon what they are doing.

In addition, evaluations help justify investment in health IT projects by demonstrating project
impacts. This is exactly the type of information needed to convert late adopters and others
resistant to health IT. We can also share such information with our communities, raising
awareness of efforts in the health IT field on behalf of patient safety and increasing quality of
care.

Thus, the question posed today is no longer why we do evaluations, but how we do them. This toolkit will assist you through the process of planning an evaluation. Section I
walks you and your team step-by-step through the process of determining the goals of your
project, what is important to your stakeholders, what needs to be measured to satisfy
stakeholders, what is realistic and feasible to measure, and how to measure these items.


Section II includes a list of measures that you may use to evaluate your project. In this latest
version, new measures have been added to each of the domains, and a new domain has been
added around quality measures. For each domain, we include a table of possible measures,
suggested data sources, cost considerations, potential risks, and general notes. A new column
has been added to this updated version of the toolkit, with links to sources that expand on how
these measures can be evaluated and with references in the literature.

Section III contains examples of a range of implementation projects with suggested
evaluation methodologies for each. In this latest version, two examples have been added on
computerized provider order entry (CPOE) and picture archiving and communication systems
(PACS).

We invite and encourage your feedback on the content, organization, and usefulness of this
toolkit as we continue to expand and improve it. Please send your comments or questions about
the evaluation toolkit or the National Resource Center to [email protected]


Section I: Developing an Evaluation Plan

I. Develop Brief Project Description
This may come straight out of your project plan or proposal.

II. Determine Project Goals
What does your team hope to gain from this implementation? What are the goals of your
stakeholders (CEO, CMO, CFO, clinicians, patients, and so on) for this project? What needs to
happen for the project to be deemed a success by your stakeholders?

Example:

To improve patient safety; to improve the financial position of the hospital; to be seen by our
patients as making patient safety an organizational priority.


III. Set Evaluation Goals

Who is the audience for your evaluation? Do you intend to prepare a report for your
stakeholders? Are you required to prepare a report for your funders? Will you use the evaluation
to convince late adopters of the value of your implementation? To share lessons learned? To
demonstrate the project’s return on investment? To improve your standing and competitive edge
in your community? Or are your goals more external? Would you like to share your experiences
with a wider audience and publish your findings? If you plan to publish your findings, this may
affect your approach to your evaluation.

Example:

To prepare a report for the stakeholders and funders of the project.

IV. Choose Evaluation Measures

Take a good look at your project goals. What needs to be measured in order to demonstrate
that the project has met those goals? Brainstorm with your team on everything that could be
measured, without regard to feasibility. Section II provides a wide range of potential measures
in the following categories:

· Clinical Outcomes Measures
· Clinical Process Measures
· Provider Adoption and Attitudes Measures
· Patient Adoption, Knowledge, and Attitudes Measures
· Workflow Impact Measures
· Financial Impact Measures


Your team might find it helpful to break down your measures into similar categories. Keep in
mind that measures should map back to your original project goals, and that they may include
both quantitative and qualitative data.

Example:

(1) Goal: To improve patient safety. Measurement: The number of preventable adverse drug
events is reduced post-implementation. (2) Goal: To improve the hospital’s financial position.
Measurement: The number of claims rejected is reduced post-implementation. (3) Goal: To be
seen by our patients as making patient safety an organizational priority. Measurement: In
patient surveys, patients answer “yes” to the question, “Do you believe this hospital takes your
safety seriously?”
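One lightweight way to keep each measure mapped back to its original goal as the brainstormed list grows is to record the goal-measure pairs in a small structured form. The following Python sketch is purely illustrative, using the example above; the field names and structure are an assumption, not part of the toolkit.

    from dataclasses import dataclass

    @dataclass
    class EvaluationMeasure:
        """One goal-to-measure pairing from the brainstorming step."""
        goal: str     # project goal the measure maps back to
        measure: str  # what will actually be observed
        kind: str     # "quantitative" or "qualitative"

    # Hypothetical encoding of the example above.
    measures = [
        EvaluationMeasure(
            goal="Improve patient safety",
            measure="Preventable adverse drug events are reduced post-implementation",
            kind="quantitative"),
        EvaluationMeasure(
            goal="Improve the hospital's financial position",
            measure="Rejected claims are reduced post-implementation",
            kind="quantitative"),
        EvaluationMeasure(
            goal="Be seen as making patient safety an organizational priority",
            measure="Patients answer 'yes' to: Do you believe this hospital takes your safety seriously?",
            kind="qualitative"),
    ]

    for m in measures:
        print(f"{m.goal} -> {m.measure} [{m.kind}]")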

V. Consider Both Quantitative and Qualitative Measures

Many people feel more comfortable in the realm of numbers and, as a result, frequently
design their evaluations solely around quantitative data. But this approach provides only a
partial picture of your project. Quantitative data can lead to conclusions about your project that
miss the larger picture.

For example:

A hospital implements a new clinical reminder system with the goal of increasing compliance
with health maintenance recommendations. An evaluation study is devised to measure the
percentage change in the number of patients discharged from the facility who receive
influenza vaccines, as recommended.

The study is carried out, and, to the disappointment of the research team, the rates of
vaccinated patients discharged pre- and post-implementation do not change. The team
concludes that their implementation goals have not been met, and that the money spent on
the system was a poor investment.


But a qualitative study of the behaviors of the clinicians using the new system would have
reached different conclusions. In this scenario, the qualitative study reveals that clinicians,
bombarded with a number of alerts and health maintenance reminders, click through the
alerts without reading them. The influenza vaccine reminders are not read; thus the rates of
influenza vaccination remain unchanged.

The study also notes that a significant number of clinicians are distracted by and frustrated
with the frequent alerts generated by the new system, with no way to distinguish the more
important alerts from the less important ones. In addition, some clinicians are unaware of
the evidence supporting this vaccine reminder and of the financial (pay-for-performance)
implications for the hospital if too few patients receive this vaccine. One clinician had the
idea that the vaccine reminder could be added to the common admission order sets. These
findings could be used to refocus the design, education, and implementation efforts for this
intervention.

However, lacking a qualitative evaluation, these insights are lost on the project team.

Qualitative studies add another important dimension to an evaluation study: they allow
evaluators to understand how users interact with a new system. In addition, qualitative studies
speak to a larger audience because they generally are easier to understand than quantitative
studies. They often generate anecdotes and stories that resonate with audiences.

Therefore, it is important to consider both quantitative and qualitative data in your evaluation
plan. Please add any qualitative measures you would like to consider.

The National Resource Center has developed a Compendium of Health IT Surveys that may
be found on the NRC Web site at Health IT Survey Compendium. This tool allows a user to
search for publicly available surveys by survey type, technology, care setting, and targeted
respondent. These surveys can then be used as is, or can be modified to suit a user’s needs.


VI. Consider Ongoing Evaluation of Barriers, Facilitators, and Lessons Learned

Lessons learned are important measures of your project and typically are captured using
qualitative techniques. These lessons may reflect the facilitators and barriers you encountered at
various phases of your project. Barriers may be organizational, financial, or legal, among many
other areas. Facilitators might include strong leadership, training, and community buy-in.

This type of information is extremely valuable not only to you but also to others undertaking
similar projects. In formulating a plan for capturing this information, consider scheduling
regular meetings with your project team to discuss the issues at hand openly and to record these
discussions. In addition, you could conduct focus groups with appropriate individuals to capture
this information more formally. For example, you could ask nurses who are using a new
technology about what has gone well, what has gone poorly, and what the unexpected
consequences of the project have been. Another way to capture valuable lessons learned is to
conduct real-time observations on how users interact with the new technology. Consider how
you could incorporate these analysis techniques into your evaluation plan. Clearly state what
you want to learn, how you plan to collect the necessary data, and how you would analyze the
data.

VII. Search for Other Easily Accessible Measures

Hospitals collect a tremendous amount of data for multiple purposes: to satisfy various
Federal and State requirements, to conduct ongoing quality assurance evaluations, and to
measure patient and staff satisfaction. Therefore, there are teams within your facility already
collecting data that might be useful to you. Reach out to these groups to learn what information
they are currently collecting and to determine whether those data can be used as an evaluation
measure.

In addition, contact the various departments in your facility to learn the reporting capabilities
of their current software programs as well as current data collection methods. There may be
opportunities to leverage these reporting capabilities and data collection methods for your
project. For example, does the billing department already measure the number of claims
rejected? Is there a team already abstracting charts for information that your team would like to
examine? Could your team piggy-back with another group to abstract a bit of additional
information? Are there useful measurements that could be taken from existing reports?
Likewise, you may find that activities you are planning as part of your evaluation would be
helpful to other teams within your facility. Cooperation in these activities can increase goodwill
on both sides.

Section II outlines several potential measures and provides sources where you may find those
measures.

Example:

The finance department’s billing system can report the number of emergency department
encounters that are coded as levels I, II, III, IV, and V. These reports are simple to run, and the
finance department is willing to run them for you. You already know that many visits are down-
coded because a visit was not sufficiently documented – an oversight that can lead to large
revenue losses. A new evaluation measure is added to determine whether the new
implementation improves documentation so that visits are coded appropriately and revenues are
increased.


VIII. Consider Project Impacts on Potential Measures

A project may have many impacts on a facility, but often these impacts depend on where the
project is implemented – for example, across groups of hospitals versus across a single facility
versus within a single department. In addition, impacts may vary according to the group that is
using a new technology – for example, all facility clinicians versus nurses only. Consider the
potential measures on your list and how your project might impact those measures. You may
find that this exercise eliminates some measures from your list if you are trying to measure
outcomes that will not be impacted by your project.


IX. Rate Your Chosen Measures in Order of Importance to Your Stakeholders

Now that your team has a list of measures to evaluate, rate each measure in order of importance
to your stakeholders (e.g., your CEO, clinicians, or patients). You could use a scale
such as: 1 = Very Important, 2 = Moderately Important, 3 = Not Important. This will help you
begin to filter out those measures that are interesting to you but will not provide
information of interest to your stakeholders.

1. Very Important:____________________________________________________
____________________________________________________________________

2. Moderately Important:_______________________________________________
____________________________________________________________________

3. Not Important:_____________________________________________________
____________________________________________________________________

X. Determine Which Measurements Are Feasible

Now examine your list to determine which measures are feasible for you to measure. Be realistic
about the resources available to you. Teams frequently are forced to abandon evaluation projects
that are labor-intensive and expensive. Instead, focus on what is achievable and on what needs
to be measured to determine whether your implementation has met its goals. For example, you
might want to know whether your implementation reduces adverse drug events (ADEs). While
this is a terrific evaluation project, if you have neither the money nor the individuals needed for
chart abstraction, the project will likely fail. Keep focused on what can be achieved. Again, you
can use a ranking scale: 1 = Feasible, 2 = Feasible with Moderate Effort, 3 = Not Feasible.

1. Feasible:__________________________________________________________
____________________________________________________________________

2. Moderate Effort:___________________________________________________
____________________________________________________________________

3. Not Feasible:_______________________________________________________
____________________________________________________________________


XI. Determine Your Sample Size

A second, extremely important, facet of feasibility is sample size. An evaluation effort can hinge
on the number of observations planned or on the frequency of events to be observed. The less
frequently the event occurs, the less feasible the planned measure becomes. If a measurement
requires a large amount of resources—for example, to directly observe clinicians at work or to
conduct manual chart review—or if you are observing very rare events, such as patient deaths,
your plan may not be feasible at all.

In planning how to study your measure, determine the number of observations you will need
to make. Generally, you need enough observations to feel confident about the conclusions you
want to draw from the data collected. If you have never estimated a sample size, you should
consult a statistician to help you do this correctly or utilize the resources on the AHRQ NRC
Web site. Appendix A offers a hypothetical example of determining sample size.

Estimate the number of observations you will need for each measure. You may find that this
exercise eliminates additional measures from your list as infeasible.
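If you want a rough feel for the arithmetic before consulting a statistician, the sketch below shows one common calculation, comparing an event rate before and after implementation, using the Python statsmodels library. The baseline and target rates are hypothetical placeholders, not values from the toolkit.

    # Minimal sample-size sketch for comparing two proportions
    # (requires statsmodels; the rates below are hypothetical).
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    baseline_rate = 0.10  # assumed pre-implementation event rate
    target_rate = 0.07    # rate the intervention is hoped to achieve

    # Cohen's h effect size for the difference between two proportions.
    effect_size = proportion_effectsize(baseline_rate, target_rate)

    # Observations needed per group for 80% power at two-sided alpha = 0.05.
    n_per_group = NormalIndPower().solve_power(
        effect_size=effect_size, alpha=0.05, power=0.80,
        alternative="two-sided")

    print(f"About {n_per_group:.0f} observations per group")

Note that the rarer the event, the larger the sample this kind of calculation will demand, which is exactly why very rare outcomes, such as patient deaths, are often infeasible to study directly.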


XII. Rank Your Choices on Both Importance And Feasibility

Place your remaining measures into the appropriate box in the grid below.

                               Feasibility Scale
                            1-Feasible   2-Moderate Effort   3-Not Feasible
  Importance Scale
  1-Very Important             (1)             (2)
  2-Moderately Important       (3)             (4)
  3-Not Important              (5)

Those measures that fall within the green zone (cell 1: Very Important and Feasible) are ones you
should definitely undertake; the measures in the yellow zones (the remaining numbered cells) are ones you can undertake in the
order listed; and those measures in the red zone (the unnumbered cells) should be avoided.
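If your team has many candidate measures, the sorting implied by the grid can be applied mechanically. The following Python sketch is a hypothetical rendering of the grid; the measure names and ratings are placeholders, and the zone mapping simply mirrors the numbered cells above.

    # Minimal sketch of the importance/feasibility grid.
    # Measure names and ratings are hypothetical placeholders.

    # Zone mapping mirroring the numbered cells: (importance, feasibility).
    ZONES = {
        (1, 1): "green (1)",   # Very Important, Feasible
        (1, 2): "yellow (2)",  # Very Important, Moderate Effort
        (2, 1): "yellow (3)",  # Moderately Important, Feasible
        (2, 2): "yellow (4)",  # Moderately Important, Moderate Effort
        (3, 1): "yellow (5)",  # Not Important, Feasible
    }

    candidates = {
        # measure name: (importance, feasibility), 1 = best on each scale
        "Rejected claims per month": (1, 1),
        "Preventable adverse drug events": (1, 2),
        "Patient safety survey responses": (2, 1),
        "Clinician documentation time": (2, 2),
        "Inpatient mortality": (1, 3),
    }

    for name, rating in sorted(candidates.items(), key=lambda kv: kv[1]):
        zone = ZONES.get(rating, "red")  # unnumbered cells fall in the red zone
        print(f"{zone:10s} importance={rating[0]} feasibility={rating[1]} {name}")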


XIII. Choose the Measures You Want To Evaluate
You now have a list of measures ranked by importance and feasibility. Narrow that list down to
four or five primary measures. If you want to evaluate other measures and you believe that you
will have the required resources available to you, list those as secondary measures.

XIV. Determine Your Study Design

Now that you know which measures you are going to undertake…