
Selecting information technology for physicians' practices: a cross-sectional study.

Karen Beekman Eden

Abstract

BACKGROUND: Many physicians are transitioning from paper to electronic formats for billing, scheduling, medical charts, communications, etc. The primary objective of this research was to identify the relationship (if any) between the software selection process and the office staff's perceptions of the software's impact on practice activities.
METHODS: A telephone survey was conducted with office representatives of 407 physician practices in Oregon who had purchased information technology. The respondents, usually office managers, answered scripted questions about their selection process and their perceptions of the software after implementation.
RESULTS: Multiple logistic regression revealed that software type, selection steps, and certain factors influencing the purchase were related to whether the respondents felt the software improved the scheduling and financial analysis practice activities. Specifically, practices that selected electronic medical record or practice management software, that made software comparisons, or that considered prior user testimony as important were more likely to have perceived improvements in the scheduling process than were other practices. Practices that considered value important, that did not consider compatibility important, that selected managed care software, that spent less than 10,000 dollars, or that provided learning time (most dramatic increase in odds ratio, 8.2) during implementation were more likely to perceive that the software had improved the financial analysis process than were other practices.
CONCLUSION: Perhaps one of the most important predictors of improvement was providing learning time during implementation, particularly when the software involves several practice activities. Despite this importance, less than half of the practices reported performing this step.

Year:  2002        PMID: 11936958      PMCID: PMC102764          DOI: 10.1186/1472-6947-2-4

Source DB:  PubMed          Journal:  BMC Med Inform Decis Mak        ISSN: 1472-6947            Impact factor:   2.796


Background

Health care providers compete for managed care contracts based on cost-effectiveness and quality of care [1-4]. Information technology (IT) provides a cost-effective way to document productivity, performance measures, cost, and quality of care. Since IT has dropped in cost over time, physician practices are now turning to it to meet these needs. Information technology for this study is defined as computer software used to store, transport, or communicate information [2,5-7].

The health care organizations that succeed in the 21st century will be those that improve quality and reduce cost. These juxtaposed objectives most likely will be reached through improved handling of information [2,8,9]. The Committee on Quality of Health Care in America reported that most clinical information remains in paper form [9]. This committee made several recommendations for improving quality, including moving clinical information to an electronic format by the end of the decade.

Information technology selection in health care has often been performed in a rather informal way, resulting in the purchase of "white elephants" [10]. The systems may not perform as planned and may cause additional work for medical staff. The systems are often purchased or developed in pieces without consideration of the overall business strategy [1].

To date, few publications have documented the selection process and the resulting impact of the IT on the health care organization. Most papers give anecdotal descriptions, often by vendors, but lack client perceptions of the information system's value [1,2,7,11-14]. Even at the hospital level, only a few client perceptions of IT adoption have been reported [15-19]. The number of available papers that examine IT selections within physician practices is even smaller than those addressing hospital selections [3,20]. However, many physicians are transitioning from paper to electronic formats for billing records, medical charts, etc.
This study aims to understand the process for selecting IT for physicians' practices and the perceptions of the IT after it is implemented. The primary objective of this research was to identify the relationship (if any) between the IT selection process and the office staff's perceptions of the IT's impact on practice activities.

Methods

To address the research objective, a literature review was completed; an expert panel was formed and consulted; a conceptual model was developed; a telephone interview survey was designed; an exploratory factor analysis was performed; and finally, a logistic regression analysis was performed. The conceptual model for this study was not based on one single overriding pre-established theory (Figure 1). Rather, it was drawn from a body of literature as well as from the observations of an expert panel regarding technology selection and how it facilitates or impedes practice activities [1-3,11,12,16,21-42]. The expert panel included physicians, health services researchers, informatics researchers, and health care industry consultants.
Figure 1

Conceptual Model.

The telephone survey was conducted with 407 physician practices in Oregon [2]. The survey elements were based on the literature review and on the feedback from the expert panel. The survey addressed the following descriptive research questions:

Q1: Who selects IT for a physician practice (e.g., administrators, clinicians, computer specialists)?
Q2: What selection steps are used?
Q3: What factors influence the purchase?
Q4: Which IT features are selected?
Q5: Who (within the practice) customizes the IT?
Q6: Is time given to learn the IT?
Q7: What are the clinical and office staff members' perceptions of this IT's impact on several office activities (e.g., scheduling, communication, quality reporting)?

The design of the telephone survey was reviewed by the Human Subjects Research Review Committee at Portland State University.

Sample

Providence Health System in Portland, Oregon provided a database of practices (n = 933) for this study. These practices all served Providence Health System in some capacity – e.g., as primary care physicians or specialists. Eligible practices had acquired software within the past five years but not within the past six months. Practices with software older than five years were disqualified because it was unlikely that the decision makers (if present) would recall the details of the selection process. Practices with software selected within the last six months were dropped because new software often requires a learning period. The original sample of 933 contained 70 practices that had no computers and 35 whose software had been purchased only in the past six months or more than five years ago. In total, 11.1% of the original sample were excluded. Of the remaining eligible practices (n = 828), 407 completed the telephone survey, representing a response rate of 49.2%. If a qualified respondent at a practice was not reached after at least three attempts (n = 269) or the respondent declined the interview (n = 152), the practice was counted as a nonrespondent. Qualified respondents were involved with software selection or software customization for the practice. Seven practices gave partial interviews and were also counted as nonrespondents. These respondents had to leave in the middle of the interview to address urgent clinic needs. Although these respondents were rescheduled, they were not reached to complete the interviews. Additionally, one respondent gave many "don't know" responses. The interviewer wrote in the comment section for this office that the respondent was not qualified for the study and should be dropped. Thus, in total, seven partial interviews and one unqualified interview were dropped from the sample, reducing the total number of offices in the study to 399. The respondents and participating practices are summarized in Table 1.
Table 1

Description of respondents and participating practices

Frequency (n = 399)

Role in practice
 Administrator/office manager, finance manager, etc.      78.9%
 Billing or scheduling staff                               9.0%
 Physician, physician's assistant or nurse practitioner    4.5%
 Other staff members                                       3.8%
 Information system managers                               3.0%
 Nurses and medical assistants                             0.8%

Type of practice
 Various specialties                                      55.7%
 Primary care                                             32.6%
 Primary care and various specialties                     11.7%

Practice size
 Single practitioner                                      46.3%
 2–10 practitioners                                       41.3%
 More than 10 practitioners                               12.4%

Practice ownership
 Private                                                  83.1%
 Health system owned                                      16.9%
Second interviews were gathered for 189 of the 407 responding practices. Since almost half of the responding offices represented single practitioners, many of these smaller offices had only one eligible participant.

Telephone survey

The survey questions were developed based on the literature review and discussions with an expert panel. Since many of the respondents were not familiar with technical IT terms, care was taken to present the survey in a "respondent friendly" format. Thirteen college student interviewers and two supervisors conducted the interviews using a telephone interviewing software package, the Computer Assisted Survey Execution System. A program was written to provide the interviewers with precise dialogue, questions, and precoded responses. As the interview progressed, the interviewer entered the responses into a personal computer. Since the study objective included capturing the perceived impacts of IT, we attempted to record perceptions from two representatives from each practice: the decision maker and a primary user (see Additional File 1: "Physician Practice Software Telephone Survey, Dialog and Questions"). The initial interview, which included questions related to the selection process and perceived impacts of the IT, lasted approximately 15–25 minutes. The respondent was asked to describe a recent IT purchase (at least six months old). For each practice, the respondent indicated whether a person in a specific role – e.g., an administrator – was involved or not involved in selection, and involved or not involved in software customization. Customization in this study referred to providing input to the software vendor for writing software specific to the practice. During the interview we read the respondents a list of selection steps. For each step, the respondent answered "yes" or "no" as to whether it was performed. During the interview the respondents were read several potential factors that might have influenced the purchase. For each one they rated the statement on a 1-to-6 scale of importance (ranging from "no importance" to "very high importance").
Finally, we asked the respondents to react to 12 statements describing potential impacts of the IT on selected practice activities. The statements were intentionally not grouped by any particular theme. The respondents rated each impact statement on a 1-to-5 scale of agreement ("strongly disagree", "slightly disagree", "neither agree nor disagree", "slightly agree", "strongly agree") or selected "not applicable." The second interview with a primary user of the software included mainly the perceived impact questions, and lasted 7–10 minutes. At the completion of the initial interview, each respondent was offered a summary of the results.

Statistical evaluation

The data from all interviews were first descriptively evaluated, primarily by computing frequencies of responses for each question. Factor analysis (principal components) revealed four latent factors related to the respondents' perceived impacts of the IT on four practice activities: scheduling, financial analysis, communication, and medical documentation [2]. Therefore, four subscales were created. The scheduling, financial analysis, and communication subscales each included two items, and the medical documentation subscale included three items. Responses of "not applicable" were coded as missing. For each subscale the mean of the items was computed. Diagnostic plots of the four practice activity subscales suggested that an explanatory model might be best approached using logistic regression, which relaxes the assumption of normality. The four subscales were recoded to dichotomous variables corresponding to agree or not agree. If the mean score (of 2–3 impact statements) for a practice activity was greater than 3.0, the respondent was scored as "1" for agree. If the mean score for a practice activity was 3.0 ("neither agree nor disagree") or less, the respondent was scored as "0" for not agree. Each of the four practice activity subscales became the dependent variable in a predictive model. The independent variables entered into the models included the demographic and selection variables.
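The recoding rule above (mean of 2–3 items on a 1-to-5 scale, "not applicable" treated as missing, mean greater than 3.0 coded as agree) can be sketched as follows. The item values and function names are hypothetical illustrations; the study's actual data are not published.

```python
def subscale_score(items):
    """Mean of a respondent's impact-statement ratings (1-5 scale).

    'Not applicable' answers are passed in as None and treated as
    missing; if every item is missing, the subscale itself is missing.
    """
    rated = [v for v in items if v is not None]
    if not rated:
        return None
    return sum(rated) / len(rated)

def dichotomize(mean_score):
    """Recode a subscale mean: 1 = agree (> 3.0), 0 = not agree (<= 3.0)."""
    if mean_score is None:
        return None
    return 1 if mean_score > 3.0 else 0

# Hypothetical respondent: two scheduling items rated 4 and 5.
assert dichotomize(subscale_score([4, 5])) == 1
# A mean of exactly 3.0 ("neither agree nor disagree") counts as not agree.
assert dichotomize(subscale_score([3, 3])) == 0
```

Note that the threshold is strict: the neutral midpoint falls on the "not agree" side, matching the paper's description.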

Multiple logistic regression

We attempted four predictive models, one for each of the newly created dichotomous subscales. Only respondents who found the impact statements relevant were included in the predictive models. Multiple logistic regression revealed relationships between the selection process and the perceptions related to the scheduling, financial analysis, and communication processes. Variables that achieved a significance level of p < .05 were retained in the models. For the perceptions related to medical documentation, no significant selection variables survived the analysis. This was most likely due to the small number of practices with electronic medical records (n = 89) and to aggregating all electronic medical records (EMRs) regardless of type and number of functions. It is also possible that the decision to purchase an EMR is often made outside the practice – e.g., a large health system offers EMRs to the practices. For 11 of the 89 practices that had EMRs, the decision was made by a large health system. Data from these practices were not included in the predictive models, thus reducing the number of available practices with EMRs to 78. A summary of the models is presented in this paper. The complete analysis and models are available elsewhere [2]. The predictive models were built using a model building data set (299 randomly selected interviews). The models were then tested with a testing data set (the remaining 100 interviews). One hundred interviews were needed to ensure adequate statistical power. As a check for cross-validation, the accuracy with which the models predicted the perceived impact subscale values using the model building data set was compared to the accuracy achieved with the testing data set. Using the parameters established with the model building data set, agreement (or not) with a perceived impact subscale was predicted for the testing data set. For cross-validation, the accuracy levels were compared using a z-test for proportions.
As seen in Table 2, the scheduling and financial analysis models had non-significant (p > .05) drops in accuracy. This suggests that the models may be generalized to other physician offices with similar demographics. Since the accuracy level dropped dramatically for the communication model, this model did not "cross-validate." The observations made in this study accurately describe the idiosyncrasies of this sample used to build the communication model, but may not accurately describe other samples of physician offices.
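The z-test for proportions used in the cross-validation check can be reproduced approximately as below. The accuracies and sample sizes are the published Table 2 values, but since the published percentages are rounded, the resulting p-values differ somewhat from those reported; the sketch illustrates the method, not the exact published figures.

```python
import math

def two_proportion_z_test(p1, n1, p2, n2):
    """Two-tailed z-test for the difference between two proportions,
    using the pooled-proportion standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed normal p-value
    return z, p_value

# Scheduling model: 73% (n = 136) vs 65% (n = 43) -- drop not significant.
_, p_sched = two_proportion_z_test(0.73, 136, 0.65, 43)
assert p_sched > 0.05

# Communication model: 90% (n = 89) vs 66% (n = 35) -- drop is significant,
# which is why that model failed to cross-validate.
_, p_comm = two_proportion_z_test(0.90, 89, 0.66, 35)
assert p_comm < 0.01
```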
Table 2

Cross-Validation Summary

Practice Activity        Model Building Data Accuracy    Testing Data Accuracy    p
Scheduling               73% (n = 136)                   65% (n = 43)             .437
Financial Analysis       86% (n = 166)                   73% (n = 56)             .059
Communication            90% (n = 89)                    66% (n = 35)             .003
Medical Documentation    (model not built)               –                        –
Once the results were completed, the expert panel was reconvened to provide insight in interpreting the results. In the sections that follow, the descriptive results, the comparison of the decision maker vs user, and each cross-validated model are summarized and discussed.

Results and Discussion

Most administrators were involved in the selection (68%) and customization (63%) processes (Table 3). Clinical staff members were also very involved in selection (62%) but not as involved in customization (33%).
Table 3

Selection process

Who selects (Q1)? And who customizes (Q5)? At least 1 of the following:    Selection Frequency (n = 399)    Customization Frequency (n = 399)
Administrators: office manager, financial manager, or medical director    68%    63%
Clinical staff members: a physician, physician's assistant, nurse practitioner, nurse or medical technician    62%    33%
Computer consultant from outside the practice    48%    39%
Office staff members: billing clerk, scheduler, receptionist, or secretary    42%    42%
Representative from a health system, insurance company, or patients    18%    18%
Computer specialist within the practice    17%    13%

What selection steps are used (Q2)?    Frequency (n = 399)

Performed cost comparisons    85%
Viewed software demonstration    81%
Issued an RFP (Request for Proposal) or RFI    79%
Compared software options with the best in the field    78%
Conducted prior user interviews    76%
Performed a needs assessment    75%
Developed selection criteria    73%
Reviewed your long-term business plan    60%
Made a site visit    47%
Developed a decision analysis    35%
Formed a selection committee    21%

What factors influence the purchase (Q3)?    Influencing Statements: Frequency rated "high" or "very high" importance (n = 399)

The software appeared easy to use.    80%
The software appeared to improve one or more of the business processes in the practice.    79%
The software provided the most value for cost.    73%
The software would help the practice perform processes needed to reach our long-term business strategy.    66%
The vendor had many sites and was responsive to our needs during the selection process.    55%
There were strong testimonies from prior users.    47%
The software was already in use by other sites affiliated with this practice.    41%
The software was compatible with existing systems in the practice.    36%
Eighty percent or more of the practices performed cost comparisons and/or viewed software demonstrations. The frequencies for the steps the practices took in selecting software are depicted in descending order in Table 3. Seventy percent or more of the practices stated that "ease of use," "improving a business process," and "most value for cost" were important factors influencing the purchase (Table 3). The frequency of factors receiving either "high" or "very high importance" ratings is also presented in descending order in Table 3. The practices typically chose commercial packages that cost less than $50,000 (note: these data were collected in the fall of 1996). Information related to IT cost, customization level, and the number of users is presented in Table 4. There were four basic software packages considered in this study. The type of package, associated computer activities, and frequencies are presented in Table 4. The results indicate that more than 85% of the practices used the software for managed care or practice management activities. Fewer than half of the practices used the software for communication activities. Only 23% of the practices accessed a completed patient record with the software.
Table 4

Selected IT

Details of IT    Frequency (n = 399)

Software cost
 Given to the practice ($0)    6%
 Less than $10,000    48%
 $10,000–$50,000    38%
 More than $50,000    8%

Customization level
 Commercial package (no customization)    49%
 Commercial package + customization    42%
 Completely custom package    9%

Number of users
 Only 1 user    23%
 2–5 users    44%
 More than 5 users    8%

Which IT features are selected (Q4)?

Software Category: Computer Activities    Frequency (n = 399)

Electronic Medical Record
 Access and complete patient records using computerized patient records    23%

Managed Care
 Track incoming and outgoing referrals    48%
 Track patient enrollment    44%
 Capitation accounting    32%
 Query database    38%
 Statistical reporting on utilization and outcomes    46%
 Follow clinical guidelines    24%
 At least one managed care activity    85%

Communication
 Email or telemedicine to external colleagues    15%
 Email within the practice    20%
 Remote link with other information systems    17%
 Access to internet    8%
 Electronic data interchange (EDI)    14%
 Online literature searches    8%
 At least one communication activity    45%

Practice Management
 Billing and collections    78%
 Appointment scheduling    50%
 Accounting spreadsheets    51%
 At least one practice management activity    92%
Ninety percent of the respondents felt the software had impacted their billing process (Table 5). The first column in Table 5 lists the theme of the impact statement. The middle column is the proportion of respondents who rated the software – meaning the impact statement was relevant to their software. For those who found the impact statement relevant, the last column depicts the proportion who slightly or strongly agreed with the impact statement. For example, in Table 5, 74% of the respondents felt the software affected the accuracy of their practice documents. Of those, 85% of the respondents agreed that practice documents were more accurate since the software was implemented.
Table 5

What are the clinical and office staff members' perceptions of this IT's impact on office activities (Q7)?

Impact Themes    "Relevant" proportion (n = 399)    For "relevant" respondents only, the "agreed" proportion
Improved billing process    90%    89%
More accurate documents    74%    85%
Improved ability to analyze managed care costs    65%    85%
Improved scheduling process    58%    76%
Improved access to patient information at multiple sites    55%    83%
Reduced malpractice costs    50%    53%
Improved referral process    50%    68%
Reduced time for recording patient information    47%    77%
Improved communication    44%    76%
Improved documented quality    38%    78%
Quicker lab results    19%    60%
Access to more journals    15%    38%

Comparison of decision-maker vs user

The primary respondents agreed with users on their perceptions of the software's impact on scheduling and financial analysis activities (p < .001). For the scheduling model, Phi was .359, with a maximal Phi of .778. For the financial analysis model, Phi was .418 with a maximal Phi of .920. Since the primary respondent was reasonably knowledgeable about the perceived impacts of the software, we did not include the user data in the remainder of the cross-validated models. The user provided only a few demographics and the perceived impact data, while the primary respondent provided the selection data as well as the perceived impact data.
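The Phi coefficient reported above is the standard 2 × 2 measure of association between two dichotomous ratings. The underlying decision-maker/user agreement tables are not published, so the counts in this sketch are purely hypothetical and serve only to illustrate the computation.

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi for a 2x2 agreement table:

                           user agrees    user disagrees
    decision maker agrees       a               b
    decision maker disagrees    c               d
    """
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

# Hypothetical counts for 189 decision-maker/user pairs; positive Phi
# indicates the two respondents tend to agree with each other.
phi = phi_coefficient(110, 25, 20, 34)
assert -1.0 <= phi <= 1.0
assert phi > 0
```

The "maximal Phi" quoted in the text reflects the fact that Phi cannot reach 1.0 when the two variables have unequal marginal distributions, which is why the observed values are reported alongside their maxima.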

Predicting the impact of the software on scheduling activities

For the scheduling model, five selection variables as a group predicted, with 73% accuracy, whether the respondents on average would agree with the following two impact statements: "The software has improved the scheduling of patients for routine, preventive and urgent appointments." "The software has improved the referral process in sending and receiving referrals quickly." The statistically significant (p < .05) predictors are presented in Table 6 along with the expected response by the respondent and the results of the multiple logistic regression analysis. The second column of the table contains the coefficient (or weighting value Bj). The Wald statistic, (Bj/standard error)², gives a measure of the significance of Bj for the predictor variable.
Table 6

Scheduling Model

Predictor    Coeff. (Bj)    Wald Statistic    p    Odds Ratio    Respondent's Reaction to Scheduling Statements
Software with electronic medical record features    1.36    3.91    .0481    3.89    More likely to agree
The practice compared software options with the best in the field    1.36    5.94    .0148    3.88    More likely to agree
Software with practice management features    0.53    3.34    .0676    1.70    More likely to agree
Importance of prior user testimony    0.33    4.74    .0295    1.39    More likely to agree
The respondent personally selected the software    -1.61    8.22    .0041    0.20    Less likely to agree
Looking at the odds ratios in Table 6, the likelihood of agreement with the scheduling subscale is almost four times (odds ratio, OR = 3.89) as great when practices selected EMR packages than if they did not select EMR packages. At first this finding was surprising. Many EMRs, however, have automatic recall features when the patient should be called or sent a reminder for a health check. Similarly, the likelihood of agreement was almost four times (OR = 3.88) as great when the practice compared the software options with the best in the field than if it did not perform this step. The practices that selected practice management software were 1.70 times more likely to agree that the software had improved the scheduling and referring of patients than practices that selected other types of software. This finding was expected since these packages typically include a scheduling module. Additionally, practices that considered "prior user testimony" important in the selection process were 1.39 times more likely to agree with the scheduling subscale than those practices that did not consider prior user testimony as an important influence. Finally, a respondent who had personally selected the software was less likely to agree with the impact statements (OR = 0.20). The members of the expert panel felt this was a symptom of "unmet expectations." The members of the selection team knew how the software was supposed to perform and were likely disappointed when it didn't live up to the vendor promises. These respondents had also probably seen the "Cadillac" performers and realized that their software had only achieved "Chevrolet" status. Another explanation is that these practices failed to fully implement the software or to adapt clinic workflows to fully utilize the software.
In summary, practices that selected EMR or practice management software, that made software comparisons, or that considered prior user testimony as important were more likely to have perceived improvements in the scheduling process than were other practices.
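The odds ratios in Table 6 are the exponentiated logistic regression coefficients (OR = e^B), which provides a quick consistency check on the reported values. The coefficients and odds ratios below are copied from Table 6; small deviations reflect rounding of the published coefficients.

```python
import math

# (predictor, coefficient B, published odds ratio) from the scheduling model.
scheduling_model = [
    ("EMR features", 1.36, 3.89),
    ("Compared options with best in field", 1.36, 3.88),
    ("Practice management features", 0.53, 1.70),
    ("Prior user testimony", 0.33, 1.39),
    ("Respondent personally selected", -1.61, 0.20),
]

for name, b, reported_or in scheduling_model:
    # exp(B) reproduces the published OR to within rounding error.
    assert abs(math.exp(b) - reported_or) < 0.02, name
```

A coefficient near zero corresponds to an odds ratio near 1 (no effect), while negative coefficients, such as the one for personal selection, give odds ratios below 1 (less likely to agree).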

Predicting the impact of the software on financial analysis activities

For the financial analysis model, five selection variables as a group predicted, with 86% accuracy, whether the respondents on average would agree with the following two impact statements: "The software has created a more accurate and timely billing process." "The software has improved the practice's ability to track and analyze costs and revenues associated with managed care contracts." The most dramatic increase in odds of agreement (OR = 8.2) occurred when the practice reduced the workload to allow time to learn the software (Table 7). However, only 36% of the 399 practices reported that reduced workloads were provided during the implementation phase. According to the survey conducted by Ambosa et al. [21], expecting medical staff to learn new software while caring for a full load of patients is a common reason for failure.
Table 7

Financial Analysis Model

Predictor    Coeff. (Bj)    Wald Statistic    p    Odds Ratio    Respondent's Reaction to Financial Analysis Statements
Time to learn (reduced workload to learn the software)    2.1    5.44    .0197    8.20    More likely to agree
Software with managed care features    1.52    7.74    .0054    4.59    More likely to agree
Importance of "value for cost" purchase influence    0.69    8.04    .0046    2.00    More likely to agree
Importance of compatibility purchase influence    -0.41    5.89    .0152    0.66    Less likely to agree
The cost of the software    -1.4    6.74    .0094    0.25    Less likely to agree
The odds of agreement were increased by more than a factor of four (OR = 4.59) for each increase in managed care activities the software contained. Since most managed care software packages are marketed to assist the practice in documenting costs associated with managed care contracts, this finding was expected. Practices that considered value an important consideration were twice (OR = 2.0) as likely to agree with the financial analysis subscale. By contrast, practices that considered compatibility an important influence were less likely (OR = 0.66) to agree with the financial analysis subscale. At first the compatibility result was surprising. However, 51% of these practices were first-time buyers, usually buying billing software, so compatibility was not a critical consideration. Ninety-one percent of first-time buyers who rated compatibility as low-to-no importance agreed with the financial analysis subscale. It is also possible that practices with existing good financial analysis processes (and little room to improve) rated compatibility as important but disagreed that the new software had improved the existing good process. The finding that less expensive packages related to more satisfied buyers was interesting (OR = 0.25). There were many good financial packages available for less than $10,000 in 1996. Practices that spent less than $10,000 bought software packages with few, but very functional, features. Those practices that spent more than $10,000 were purchasing complex systems, perhaps for multiple sites. Financial analysis may just have been a small module of these multi-purpose packages. In summary, practices that considered value important, that did not consider compatibility important, that selected managed care software, that spent less than $10,000, or that provided learning time during implementation were more likely to perceive that the software had improved the financial analysis process than were other practices.

Observations from both models

In looking over the predictors for the two cross-validated models (scheduling and financial analysis), some predictors naturally belong in one model or the other – e.g., practice management software in the scheduling model and managed care software in the financial analysis model. The themes in the scheduling model center on software features (EMR and practice management software, comparison of software options) and usability (prior user testimony and personal selection by respondent). The themes in the financial analysis model include cost (software cost, value), software features (managed care software and compatibility), and learning time. This might suggest that the respondents for the financial analysis model had differing roles in the practice than the respondents for the scheduling model. In both of these models, 79% of the respondents were administrators. Since all types of administrators (e.g., office managers, finance managers) were grouped together, it was impossible to identify the primary role of the administrator who responded. The differences in the models also suggest that the predictors of success differ by the types of activities the software is intended to perform. It might appear odd that some predictors (e.g., learning time) did not carry through to both models. It is likely that the type and complexity of the software package contributed to the learning demands on the office. Many of the respondents who agreed with the financial analysis subscale chose managed care software that bundled together many activities (tracking incoming and outgoing referrals, patient enrollment, capitation accounting, and/or utilization reporting). For practices learning this type of software, protected learning time was an important predictor of success. For practices implementing practice management software (scheduling, billing, and/or accounting spreadsheets), the learning demand was less.
This naturally suggests that the decision to reduce the workload while learning a software package should consider the number and complexity of the tasks to be learned.

Limitations and research opportunities

The respondents for this study primarily represented practices that serve Providence Health System in Oregon. These practices served either as managed care providers or as fee-for-service providers. The only practices excluded were pure HMO providers – e.g., Kaiser Permanente. The pure HMO practices were excluded because it was unclear whom to interview regarding software selections; often these practices are given software directly by the organization. Eighty-seven percent of the practices in this study had 10 practitioners or fewer. Only 17% of these practices had in-house computer specialists assisting with software selection. The results of this study may not generalize to large practices, which often have in-house computer specialists assisting with selection. A future study could include a nationwide survey of all types of physician practices, regardless of managed care status, ownership, specialty, or size. This study is retrospective in nature, requiring the respondents to recall a software purchase that occurred several months, perhaps more than a year, earlier. In an "ideal study design," a questionnaire would be distributed to practices that have recently made selections. Another questionnaire addressing the impact on the practice could be sent at a pre-defined follow-up period – e.g., six months after implementation. This "ideal study design" would be difficult to conduct without a sufficient list of practices that have recently purchased software. Perhaps software manufacturers and vendors could provide lists of recent clients (with permission) to interested researchers. The cross-sectional survey design of this study captured the technical aspects of the selection process (e.g., who was involved, what steps were taken). Although the respondents were given a few "open-ended" questions, most provided little additional information. There could have been additional selection steps, influences, and impacts.
It is also possible that the observed differences in impact were related to variables we did not attempt to measure, such as the ability and desire of management to implement new technologies and to change existing practice activities. Focus groups might be more effective at capturing underlying management expertise. Another, far more time-intensive approach would be to conduct a series of case studies documenting the decision-making process over time; this would require practices to allow observers to remain on-site during the selection process, but it would also support a more well-rounded evaluation from multiple perspectives.

The current study relies on perception-based responses, primarily from office managers, to measure many variables, including the impact variables. Their perceptions were related to business-oriented practice activities; only 5.3% of the respondents were clinicians. Expanding this study to include more clinician responses would likely reveal perceptions related to other processes, such as medical documentation or treatment.

The subscales (related to practice activities) were formed from responses to only two or three original impact questions. A stronger design would include several questions related to each activity. Given the exploratory nature of this research, this limitation could not have been foreseen; however, the results open doors for more confirmatory studies to design survey instruments that measure software impact with underlying practice-activity constructs.

This study does not attempt to demonstrate cause and effect. It would be important to have respondents rate existing practice activities (before purchasing software) to control for a "ceiling effect": practices with already good processes have little room to improve. If such a trial were designed, it would also need to control for the type of IT and the needs of the buyer.
Moving toward a more direct measure of impact would require the practices to closely measure performance and behavior. For example, in this study, respondents were asked whether the practitioners had an improved ability to consult professional literature online. A direct measurement method would instead count the number of online literature consultations before and after the software installation.

Conclusions

The results of this research describe the software selection process as it occurs in physician practices. Using a telephone interview survey gave the researcher (and the other interviewers) direct contact with the decision makers in each practice. The results also describe how software is perceived to affect several practice activities. The objective of this study was to identify relationships (if any) between the IT selection process and the office staff's perceptions of the IT's impact on practice activities. The multiple logistic regression models confirmed relationships between the selection process and the perceived impacts on the scheduling and financial analysis activities; these are relationships, not cause and effect, between the selection process and the users' perception of software usefulness. Although many of the relationships were expected (e.g., performing software comparisons, interviewing prior users, and selecting certain software features improved perceptions about practice activities), perhaps one of the most important predictors of improvement was reducing the workload during implementation. Despite the importance of this predictor, only 36% of the practices in this study performed this step; if more practices had performed it, it might have carried even more weight in the analysis. From a practical standpoint, many of the offices selected and implemented IT but expected the staff to learn the software while caring for a full load of patients. Ambroso et al. [21] cite this expectation as a common reason for IT failure. A secondary finding of this research is that the purchasers of the software (often office managers) had perceptions about the software's use similar to those of users who were not involved in the selection process.
This finding supports the use of a single-survey-respondent study design for understanding software's perceived impacts on business-related practice activities.
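The predictors discussed above were reported as odds ratios from multiple logistic regression models, which adjust each predictor for the others. As a purely illustrative sketch of the underlying quantity, the snippet below computes an unadjusted odds ratio from a hypothetical 2x2 table relating one selection-process step (providing learning time) to a perceived improvement; the counts are invented for illustration and are not the study's data.

```python
# Unadjusted odds ratio from a hypothetical 2x2 table.
# The counts below are invented for illustration only; they are NOT the study's data.
#
#                      improved    not improved
#   learning time         30            10
#   no learning time      40            60

def odds_ratio(a, b, c, d):
    """OR = (a/b) / (c/d) = (a*d) / (b*c) for a 2x2 table
    [[a, b], [c, d]] of exposed/unexposed rows vs outcome/no-outcome columns."""
    return (a * d) / (b * c)

or_learning_time = odds_ratio(30, 10, 40, 60)
print(f"Unadjusted odds ratio: {or_learning_time:.1f}")  # (30*60)/(10*40) = 4.5
```

In the study itself, the odds ratios (e.g., the 8.2 reported for providing learning time) were estimated within multiple logistic regression models and are therefore adjusted for the other predictors; the raw 2x2 calculation shown here is only the unadjusted form of the same measure.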

List of Abbreviations

EMR: Electronic Medical Record
IT: Information Technology
OR: Odds Ratio

Competing Interests

None Declared.

Author comments on prior presentation of results

The results of this study were presented at the Portland International Conference on Management of Engineering and Technology (PICMET), Portland, Oregon, in 1997 and 1999, and at the Institute for Operations Research and the Management Sciences (INFORMS) conference, Philadelphia, Pennsylvania, in 1999. The references for the conference proceedings are listed below.

Eden K, Kocaoglu D. Information Technology Selection Process and Perceived Impacts in Physician Practices. In Technology and Innovation Management. Portland State University, PICMET conference proceedings, 1999, pp. 562–568. Executive summary presented in proceedings, Portland International Conference on Management of Engineering and Technology, Portland, Oregon, 1999, pp. 392–394.

Eden K, Kocaoglu D. Selection of Information Technology in the Health Care Industry. Presented at the Institute for Operations Research and the Management Sciences conference, Philadelphia, Pennsylvania, November 1999.

Eden K, Kocaoglu D. Selection and Implementation of Information Technology in the Health Care Industry. Preliminary results presented at the Portland International Conference on Management of Engineering and Technology, published in proceedings, Portland, Oregon, 1997, pp. 199–202.

Pre-publication history

The pre-publication history for this paper can be accessed here:

Additional file 1

Scripted telephone survey, "Physician Practice Software Telephone Survey, Dialog and Questions", by K.B. Eden. The file contains the script, questions and pre-coded responses, variable names (in the left margins, appearing as >xxxx<), and several logical statements (e.g., goto, if) to lead the interviewer through the interview.

References

1. Roderer NK, Clayton PD. IAIMS at Columbia-Presbyterian Medical Center: accomplishments and challenges. Bull Med Libr Assoc. 1992 Jul.

2. Ambroso C, Bowes C, Chambrin MC, Gilhooly K, Green C, Kari A, Logie R, Marraro G, Mereu M, Rembold P. INFORM: European survey of computers in intensive care units. Int J Clin Monit Comput. 1992.

3. Simpson RL. Clinical information systems vs. practicing physicians. Nurs Manage. 1992 Dec.

4. Anderson TR. Physicians' resistance to claims automation. J Health Care Benefits. 1993 Nov-Dec.

5. Sandiford P, Annett H, Cibulskis R. What can information systems do for primary health care? An international perspective. Soc Sci Med. 1992 May.

6. Wall R. Computer Rx: more harm than good? J Med Syst. 1991 Dec.

7. Neal T. Evaluating and selecting an information system, Part 1. Am J Hosp Pharm. 1993 Jan.

8. Elevitch F, Treling C, Spackman K, Weilert M, Aller R, Skinner M, Pasia O. A clinical laboratory information systems survey. A challenge for the decade. Arch Pathol Lab Med. 1993 Jan.

9. Aronow DB, Coltin KL. Information technology applications in quality assurance and quality improvement, Part I. Jt Comm J Qual Improv. 1993 Sep.

10. Singh AK, Moidu K, Trell E, Wigertz O. Impact on the management and delivery of primary health care by a computer-based information system. Comput Methods Programs Biomed. 1992 Feb.

