Sir,

In the original article by Pahuja et al.[1] on the evaluation of an innovative digitally driven program to train primary care doctors (PCDs) of Uttarakhand to integrate psychiatric care into their daily practice, we find the conclusions too optimistic given several methodological issues. The title, too, can easily mislead the reader into assuming a system-wide intervention, whereas only ten PCDs received this training; a state such as Uttarakhand has a few thousand such PCDs. Moreover, these PCDs were not randomly selected but deputed by the Government of Uttarakhand, and may already have had an interest in psychiatric care. Thus, the authors' observation of "active participation and involvement of PCDs" may be partly due to this selection, and the assumed impact may be minimal.

Before commenting on the study specifics, we note that the inclusion of a public health module was a refreshing change to the training program, addressing what was previously pointed out as an essential shortcoming of the program.[2] We credit the investigators for this flexibility.

Regarding the critical methodological issues, we would like to raise our concerns with both outcome parameters of the training program: the Primary Care Psychiatry Quotient (PCPQ) and the Translational Quotient (TQ).

The PCPQ is the proportion of psychiatric cases identified by the PCDs among all general patients, and the study finds it to be 11%, compared with a reported rate of 17%–46% of psychiatric cases in primary care consultations.[3] However, the latter figure does not cover all psychiatric cases but only common mental disorders, and does not include tobacco or alcohol addiction or psychosis.
After excluding those conditions and recalculating, the PCPQ comes to 3.8%, far from the "good beginning" suggested by the authors.

The TQ, on the other hand, was a directly observed rating of real-time general outpatient consultations at two time points (6 and 9 months after initiation of training), assessing skills in six domains, namely elicitation of psychiatric symptoms, clinical reasoning, choosing appropriate psychotropics, etc. Such observation of actual clinical care has the potential to elucidate the quality of care. However, the program's evaluators opted for tele-observation (Telepsychiatric On-Consultation Training [OCT] evaluation) rather than direct in-person observation, missing the opportunity for a more nuanced rating. Understandably, tele-observation is convenient and saves resources, but even in a tele-training program the evaluation did not have to be conducted online, and this choice does not favor robustness of evaluation. Further, although the six criteria were rated on a 5-point Likert scale, the authors do not detail the scoring criteria for the individual items; presumably, the scoring was done subjectively by the tele-psychiatrist. Moreover, the authors do not report the number of different evaluators involved, which calls inter-rater reliability into question. It is also not specified whether the tele-psychiatrist observed the PCD's clinic for the entire day (or for how many hours); reporting hours of observation, rather than the number of tele-OCT evaluations, would have been more informative. Finally, the distribution of the 109 consultations evaluated (42.2% with a psychiatric diagnosis) suggests that the patients were a selected group, biasing the PCDs toward psychiatric diagnoses. Such bias is further aggravated by the Hawthorne effect, whereby care providers alter their behavior because they know they are being observed.
As a result, we find that the TQ range remained exactly the same across the two time points (53.34%–86.67%), which could result from biased evaluation or ineffective training.
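For concreteness, the PCPQ recalculation discussed above can be sketched as follows. The denominator of 1,000 consultations is a hypothetical figure chosen purely for illustration; only the 11% and 3.8% proportions are taken from the article under discussion.

```python
def pcpq(psychiatric_cases: int, total_patients: int) -> float:
    """Primary Care Psychiatry Quotient: psychiatric cases identified
    among all general patients, expressed as a percentage."""
    return 100.0 * psychiatric_cases / total_patients

total = 1000       # hypothetical total patient load, for illustration only
identified = 110   # 11% of consultations flagged as psychiatric (reported PCPQ)
cmd_only = 38      # after excluding tobacco/alcohol addiction and psychosis

print(pcpq(identified, total))  # 11.0 -- the PCPQ as reported
print(pcpq(cmd_only, total))    # 3.8  -- the figure comparable to the 17%-46% benchmark
```

The point of the sketch is that the 17%–46% benchmark counts only common mental disorders, so only the restricted numerator yields a like-for-like comparison.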