Abstract
I recently purchased a laptop. The manufacturer claimed that its battery time was over 8 hours. However, when I started using the laptop, the battery never lasted that long. I called the customer care helpline. They told me that the figure of 8 hours had been arrived at using very advanced, standardized software, which estimates the battery time under 'standard' conditions (for the uninitiated, this means switching the machine on at its lowest brightness and then not using it, except for low-end applications such as word processing). Now that was a problem. I hate carrying chargers in my handbag. How do I know how long the battery will last under actual working conditions? So I started using the laptop as I normally would, i.e. for word processing, making slides, connecting to the Internet, listening to music and occasionally watching movies. After about a week, I thought 5 hours was a fair estimate. Just to be sure, I also requested my son to use it for a week (you guessed it, for gaming), and he too thought 4-5 hours was a good estimate. Now when I travel, I do not carry my charger if I estimate my computer use will be less than 4 hours. This incident got me thinking about the assessment of medical students. We are fond of objective and standardized tests, which are administered under standard test-taking conditions and in which students are awarded certain grades. However, what happens when these doctors face a real-life situation? Are we incorrectly estimating the competences of our students in a controlled environment? Whether it is estimating the battery time of a laptop, the mileage of a new car or the competence of students, the issue seems to be the same: one-shot observation using standardized tools in artificial settings versus long-term observation in real-life situations. Copyright 2012, NMJI.
Year: 2012 PMID: 23448630
Source DB: PubMed Journal: Natl Med J India ISSN: 0970-258X Impact factor: 0.537