Literature DB >> 32382541

Improved Landmark Dynamic Prediction Model to Assess Cardiovascular Disease Risk in On-Treatment Blood Pressure Patients: A Simulation Study and Post Hoc Analysis on SPRINT Data.

Mehrab Sayadi1,2, Najaf Zare3, Armin Attar4, Seyyed Mohammad Taghi Ayatollahi2.   

Abstract

The landmark model (LM) is a dynamic prediction model that uses a longitudinal biomarker in time-to-event data to make prognostic predictions. This study was designed to improve this model and to apply it to assess cardiovascular risk in on-treatment blood pressure patients. A frailty parameter was added to LM, yielding the landmark frailty model (LFM), to account for patient frailty and to measure the correlation between different landmarks. The proposed model was compared with LM in different scenarios with respect to missing-data status, sample size (100, 200, and 400), number of landmarks (6, 12, 24, and 48), and failure percentage (30, 50, and 100%). Bias of parameter estimation, mean square error, and the deviance statistic between models were compared. Additionally, discrimination and calibration, as measures of goodness of fit, were evaluated using the dynamic concordance index (DCI), dynamic prediction error (DPE), and dynamic relative prediction error (DRPE). The proposed model was applied to blood pressure data obtained from the Systolic Blood Pressure Intervention Trial (SPRINT) in order to calculate cardiovascular risk. The dynpred, coxme, and coxphw packages in R 3.4.3 were used. The results showed that our proposed model, LFM, performed better than LM: parameter estimates in LFM were closer to the true values than those in LM, and the deviance statistic showed a statistically significant difference between the two models. With 6, 12, and 24 landmarks, LFM had a higher DCI over time, indicating better discrimination at all three landmark settings. Both DPE and DRPE were lower in LFM than in LM over time, indicating that LFM was better calibrated than its peer. Moreover, in the real data, the structure of the prognostic process was predicted better by LFM than by LM. Accordingly, LFM is recommended for assessing cardiovascular risk due to its better performance.
Copyright © 2020 Mehrab Sayadi et al.


Year:  2020        PMID: 32382541      PMCID: PMC7195630          DOI: 10.1155/2020/2905167

Source DB:  PubMed          Journal:  Biomed Res Int            Impact factor:   3.411


1. Introduction

Risk prediction models (RPMs) are used as diagnostic models to estimate the probability that an event occurs in a disease, or as prognostic models to estimate the probable consequences of a disease. Accurate risk prediction is essential in clinical research, and patients have the right to be informed about their disease progression [1]. RPMs are increasingly used to help clinicians make the best diagnostic and therapeutic decisions based on a patient's demographics, test results, or disease characteristics [2]. Diagnostic models are usually used for risk classification, while prognostic models use time to assess disease progression [3, 4]. Many prediction models have been applied to cardiovascular disease, such as diagnostic models for assessing risk factors [3]. However, most cardiovascular risk assessment tools are static prediction models that use only baseline predictors and therefore have shortcomings, such as poor prediction [5]; for instance, they cannot determine the long-term survival of a heart attack patient after previous successful treatment, nor capture the decreased risk of a cardiovascular event in a treated hypertensive patient. During an intervention, biomarkers that are potentially informative about treatment efficacy are measured repeatedly [6-9]. Risk prediction using a longitudinal biomarker is referred to as the dynamic prediction model (DPM), introduced by several researchers [10-13]. One DPM is joint modeling (JM) [14-16], which requires correct specification of the biomarker process distribution and the event time; this biomarker distribution, however, is usually unknown, and generalizing JM to more than one marker leads to considerable computational complexity [17]. The landmark model (LM), another DPM, is an appropriate alternative to JM [9, 17-19].
The main advantage of LM is its simplicity: it requires fewer assumptions than JM and may be more powerful. In LM, time is divided into different landmarks, and a simple Cox proportional hazards (PH) model is applied at each landmark to the individuals who are still alive at that time [20, 21]. The biomarker value at each landmark time is treated as a fixed variable, which makes risk prediction feasible. A landmark window is chosen to predict survival until time sl + w, called the horizon time (thor); w, the prediction window, is the length of time over which patient survival is predicted. In an LM analysis, the less frail patients tend to be retained dynamically across the landmark times. Moreover, the estimated parameters in LM can be affected if some patients do not follow the scheduled clinical visits, and ignoring the correlation between landmarks might affect the risk prediction. Bias in LM probably originates from these neglected issues. To improve LM, we used a frailty parameter to construct a new model, the landmark frailty model (LFM). Finally, LFM was used to assess cardiovascular risk in on-treatment hypertensive patients. To this end, a simulation study was designed, and real blood pressure data obtained from the Systolic Blood Pressure Intervention Trial (SPRINT) were analyzed [22, 23]. The rest of this article is organized as follows. Section 2 briefly describes the landmark approach and the proposed approach, along with the simulation settings, goodness-of-fit indices, and the real data. Section 3 presents simulation studies comparing LFM with LM. Section 4 illustrates our approach on the SPRINT data, and Section 5 discusses and concludes the simulation and real-data results.

2. Materials and Methods

2.1. Landmark Approach

Assume that Ti and Ci are the survival (failure) time and the censoring time; then Ti* = min(Ti, Ci) is the observed time. X denotes the vector of covariates measured once at the beginning of the study; for example, age and gender are measured only at baseline and are treated as fixed variables. Y(.) denotes the longitudinal biomarker, such as systolic or diastolic blood pressure, which can be measured at several time points. For risk assessment, the Cox proportional hazards (PH) model, the most widely used model, is defined as

h(t | X) = h0(t) exp(β'X),   (1)

where h(t) and h0(t) are the hazard function at time t and the baseline hazard, respectively. In LM, time is divided into several landmark times s1, s2, ⋯, sK. At landmark l (l = 1, 2, ⋯, K), subjects who are still at risk are retained for analysis and the remaining individuals are omitted [9]. At each landmark, the longitudinal biomarker value Y(sl) is treated as a fixed variable. Then a time period, the landmark window, is chosen to predict survival until time sl + w, called the horizon time (thor); w, the prediction window, is typically 3 or 5 years. The Cox PH model in equation (1) is reformulated, and the conditional hazard function is estimated by

h(t | X, Y(sl), sl) = h0(t | sl) exp(β'X + γY(sl)),   sl ≤ t ≤ thor,   (2)

where h0(t | sl) is a separate baseline hazard for each landmark. The model in equation (2) is the simple or basic LM: a model is fitted to each landmark, estimating the landmark-specific effect of the biomarker for predicting survival between sl and thor. We assumed that the longitudinal biomarker Y for subject i at time tij was obtained from a mixed-effects model of the form

Yi(tij) = X'g(tij) βg + Z'(tij) bi + εij,   (3)

where Z and X denote the design vectors for the random and fixed effects and the subscript g is 0 or 1.
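As a concrete illustration of this construction, the sketch below builds the analysis data set for a single landmark time in plain Python. It is a minimal sketch under our own conventions (subject dictionaries, last observation carried forward for Y(s), administrative censoring at the horizon s + w), not the authors' code, which used the dynpred package in R.

```python
def build_landmark_dataset(subjects, landmark, window):
    """Build the analysis set for one landmark time s.

    subjects: list of dicts with keys
      'time'      - observed time T* = min(T, C),
      'event'     - 1 if failure, 0 if censored,
      'biomarker' - (measurement_time, value) pairs, sorted by time.
    Subjects still at risk at s are kept, the last biomarker value
    observed at or before s is carried forward as Y(s), and
    follow-up is administratively censored at the horizon s + w.
    """
    rows = []
    horizon = landmark + window
    for sub in subjects:
        if sub['time'] <= landmark:          # failed or censored before s
            continue
        past = [v for (t, v) in sub['biomarker'] if t <= landmark]
        if not past:                         # no measurement available at s
            continue
        rows.append({
            'landmark': landmark,
            'y_at_s': past[-1],              # Y(s), treated as a fixed covariate
            'time': min(sub['time'], horizon),
            'event': sub['event'] if sub['time'] <= horizon else 0,
        })
    return rows
```

Fitting a Cox model to each such data set (one per landmark) reproduces the basic LM; stacking the rows over all landmarks gives the data layout used by the super model.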
According to equation (2), to account for the frailty of patients and the correlation between sequential measurements, LFM is defined as

h(t | Xi, Yi(sl), sl, ui) = h0(t | sl) exp(β'Xi + γYi(sl) + ui),

where ui indicates the frailty of patient i, which follows a multivariate normal distribution with mean 0 and covariance matrix Σ(θ). The survival prediction is related to the cumulative hazard function through

P(Ti ≥ thor | Ti ≥ sl) = exp(−∫ from sl to thor of h(t | ·) dt).

To estimate the parameters, the Cox partial likelihood is modified into the following integrated (over landmarks) partial log-likelihood (IPL) [24]:

IPL = Σl Σi dil [ηil − log Σ over j in Rl(til) of exp(ηjl)],

where ηil is the linear predictor of the model above, dil is an event indicator (dil = 1 if subject i fails while under observation at landmark l, and 0 otherwise), and Rl(s) denotes the risk set at time s at landmark l. We used the integrated partial likelihood (IPL*) obtained by integrating out the random effects [25]; maximizing IPL* provides maximum likelihood estimates (MLE) of the parameters. The coxme function in the coxme package provides only ML estimates, and, given the complexity of the IPL* calculation, it uses the Laplace approximation technique. More details are described elsewhere [25, 26]. A single model for all landmarks can also be fitted by stacking the data sets; this super LFM treats the parameters as depending on the landmark time in a smooth fashion, for example by expanding each coefficient as a low-order polynomial in s, such as β(s) = β0 + β1 s + β2 s². The dynpred, coxme, and coxphw packages in R 3.4.3 were used for data analysis.
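The integrated partial log-likelihood can be evaluated numerically once the landmark data sets are stacked. The sketch below is a Breslow-type version in Python, assuming each stacked row carries its landmark label, follow-up time, event indicator, and linear predictor (with any frailty term already added into the linear predictor as an offset); the actual estimation in the paper was done with coxme in R, not with this code.

```python
import math

def integrated_partial_loglik(stacked):
    """Breslow-type partial log-likelihood summed over landmarks.

    stacked: list of rows {'landmark': l, 'time': t, 'event': d,
    'eta': linear predictor beta'X + gamma*Y(s) (+ frailty u)}.
    Risk sets are formed within each landmark, as in the
    landmark (super) model.
    """
    ipl = 0.0
    for row in stacked:
        if row['event'] != 1:
            continue
        # risk set R_l(t): same landmark, still under observation at t
        risk = [r['eta'] for r in stacked
                if r['landmark'] == row['landmark'] and r['time'] >= row['time']]
        ipl += row['eta'] - math.log(sum(math.exp(e) for e in risk))
    return ipl
```

With two subjects at one landmark, equal linear predictors, and one event, the contribution reduces to −log 2, which makes the formula easy to sanity-check by hand.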

2.2. Simulation Study Setting

To assess the models in different settings, we set up several scenarios varying the sample size (n = 100, 200, and 400), number of landmarks (6, 12, 24, and 48), failure rate (30, 50, and 100%), and complete/missing data. A dichotomous covariate with a binomial distribution (like a treatment effect) and a continuous covariate with a normal distribution (like age), denoted X1 and X2, respectively, were considered. The regression coefficients β1 and β2 were set at the true values 0.5 and 1.5, respectively. In equation (3), we assumed that b has a bivariate normal distribution for the random intercept and slope with mean 0 and covariance matrix elements δ11 = 2, δ12 = 0.2, and δ22 = 1, and that the individual error term ε follows a normal distribution with mean 0 and variance 1. The continuous biomarker Y was measured sequentially 10 times for each individual. The time T was generated from a Weibull distribution [27] with k = 1.1, γ = 0.4, λ = 0.01, and φ = 0.75, where v has a uniform(0, 1) distribution. Moreover, we fitted the simple Cox model, which included only the baseline data, for the three sample sizes and three failure percentages.
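The authors' generating equation is not reproduced in this extract, so the sketch below shows only the standard inverse-transform recipe for drawing event times from a Weibull baseline hazard under a Cox model (the Bender et al. approach); the γ and φ terms of the authors' full generator, which tie in the longitudinal marker and the frailty, are omitted here as an assumption.

```python
import math
import random

def weibull_cox_time(x1, x2, beta1=0.5, beta2=1.5, k=1.1, lam=0.01, rng=random):
    """Inverse-transform draw of an event time T for a Weibull
    baseline hazard h0(t) = lam * k * t^(k-1) under a Cox model:
    T = (-log(v) / (lam * exp(beta'x)))^(1/k), v ~ Uniform(0, 1).
    This covers only the baseline Weibull-Cox part of the paper's
    generator; the gamma and phi terms are not reproduced here.
    """
    v = rng.random()                     # v ~ Uniform(0, 1)
    eta = beta1 * x1 + beta2 * x2        # linear predictor beta'x
    return (-math.log(v) / (lam * math.exp(eta))) ** (1.0 / k)
```

Passing a seeded `random.Random` (or any object with a `.random()` method) as `rng` makes the draws reproducible for simulation studies.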

2.3. Goodness of Fit (GOF) and Prediction Ability Indices

Several indices assess the goodness of fit (GOF) and prediction ability of a DPM. We used the standard error, the bias of parameter estimation, and the mean square error (MSE), obtained from 300 simulated data sets. To compare LFM with LM, the log-likelihood and the deviance statistic were used; the latter was compared with the mixture chi-square value (1.92) obtained from (1/2)(χ0² + χ1²). The Akaike information criterion (AIC) was used, with a smaller AIC implying a better fit. Moreover, the dynamic concordance index (DCI), dynamic prediction error (DPE), and dynamic relative prediction error (DRPE) were used to measure discrimination and calibration ability. DPE was obtained from the Brier error score formula

BS(t) = (1/N) Σi (di(t) − π̂i(t))²,   (11)

where di(t), the actual observation for subject i at time t, is the event status, either 1 or 0 (occurrence or nonoccurrence of the event, respectively), and π̂i(t) is the event probability implied by the survival prediction of LM or LFM [28]. The Brier score measures the average discrepancy between the true event status and the predicted survival at time t; a lower Brier score indicates better predictive performance. DRPE was calculated from

DRPE(t) = 1 − Err_current(t) / Err_null(t).   (12)

In equation (12), the errors are obtained from equation (11); the null model is a model without any covariates, such as the Kaplan-Meier estimate, and the current model is LM or LFM.
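A minimal, uncensored version of these two error measures can be written directly from the definitions; note that the published DPE likely uses inverse-probability-of-censoring weighting, which this sketch omits.

```python
def brier_score(event_status, pred_event_prob):
    """Mean squared distance between the observed event status d_i(t)
    (1 = event by time t, 0 = not) and the model's predicted event
    probability at t. Lower values indicate better calibration.
    """
    n = len(event_status)
    return sum((d - p) ** 2 for d, p in zip(event_status, pred_event_prob)) / n

def relative_prediction_error(err_model, err_null):
    """DRPE-style reduction in prediction error relative to a
    covariate-free null model (e.g., the Kaplan-Meier estimate)."""
    return 1.0 - err_model / err_null
```

A perfect prediction gives a Brier score of 0, an uninformative prediction of 0.5 for every subject gives 0.25, and halving the null model's error gives a relative prediction error of 0.5.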

2.4. Real Data

We used part of the Systolic Blood Pressure Intervention Trial (SPRINT) study [29] (National Heart, Lung, and Blood Institute (NHLBI), funded by the National Institutes of Health; ClinicalTrials.gov number NCT01206062), obtained upon request (ID 4612). The methods of the main SPRINT study are reported in detail elsewhere [30]. In summary, in that randomized controlled trial, 9361 nondiabetic participants with systolic blood pressure (SBP) of 130 mmHg or more were allocated to intensive-treatment (target SBP < 120 mmHg) or standard-treatment (target SBP < 140 mmHg) groups. Baseline data, laboratory data, and 21 repeated measurements of SBP over 5 years were collected. Heart failure, stroke, myocardial infarction, other acute coronary syndromes, and death from cardiovascular causes were regarded as cardiovascular events. We designed a case-cohort study from these data that included the Framingham risk factors of age, gender, total cholesterol (TCH) level, high-density lipoprotein cholesterol (HDL) level, and SBP; the treatment effect was added to these variables in our model. We considered 10 measurements of SBP (baseline and 6, 12, 18, 24, 30, 36, 42, 48, and 54 months). The aim was to determine the dynamic effect of blood pressure on cardiovascular disease risk by comparing LFM with LM; these two models were also compared with the simple Cox model, which considered only the baseline blood pressure. To compare LFM with LM, we used the AIC and deviance criteria, with the deviance tested against the mixture chi-square distribution.

3. Results of Simulation

Simple Cox model results are summarized in Table 1, and the results of LM and LFM are summarized in Tables 2–4.
Table 1

Simple Cox model results in different simulations.

|     |          | Failure = 30% |          | Failure = 50% |          | Failure = 100% |          |
| n   | Estimate | β1 = 0.5 | β2 = 1.5 | β1 = 0.5 | β2 = 1.5 | β1 = 0.5 | β2 = 1.5 |
|-----|----------|----------|----------|----------|----------|----------|----------|
| 100 | Mean     | 0.384    | 1.290    | 0.384    | 1.286    | 0.401    | 1.103    |
|     | SE       | 0.513    | 0.368    | 0.543    | 0.392    | 0.318    | 0.226    |
|     | Bias     | -0.116   | -0.210   | -0.115   | -0.213   | -0.099   | -0.397   |
|     | MSE      | 0.677    | 0.464    | 0.377    | 0.304    | 0.158    | 0.128    |
| 200 | Mean     | 0.402    | 1.327    | 0.417    | 1.298    | 0.420    | 1.285    |
|     | SE       | 0.750    | 0.549    | 0.246    | 0.174    | 0.252    | 0.180    |
|     | Bias     | -0.098   | -0.173   | -0.082   | -0.202   | -0.08    | -0.215   |
|     | MSE      | 0.720    | 0.692    | 0.065    | 0.084    | 0.080    | 0.097    |
| 400 | Mean     | 0.405    | 1.287    | 0.438    | 1.308    | 0.431    | 1.300    |
|     | SE       | 0.320    | 0.226    | 0.252    | 0.180    | 0.175    | 0.123    |
|     | Bias     | -0.095   | -0.213   | -0.062   | -0.191   | -0.069   | -0.200   |
|     | MSE      | 0.106    | 0.326    | 0.368    | 0.125    | 0.036    | 0.059    |
Table 2

LM and LFM results in different scenarios when failure rate is 30%.

(a) Data without missing

| L | n | Estimate | LM β1 = 0.5 | LM β2 = 1.5 | LM AIC | LFM β1 = 0.5 | LFM β2 = 1.5 | LFM AIC | D |
|---|---|----------|-------------|-------------|--------|--------------|--------------|---------|---|
| 12 | 100 | Mean | 0.418 | 1.406 | 619 | 0.422 | 1.429 | 614 | 1.76 |
| | | SE | 0.224 | 0.182 | | 0.219 | 0.180 | | |
| | | Bias | -0.081 | -0.093 | | -0.077 | -0.070 | | |
| | | MSE | 0.201 | 0.138 | | 0.210 | 0.140 | | |
| 12 | 200 | Mean | 0.459 | 1.405 | 1521 | 0.465 | 1.420 | 1515 | 2.39 |
| | | SE | 0.149 | 0.125 | | 0.150 | 0.123 | | |
| | | Bias | -0.040 | -0.095 | | -0.034 | -0.079 | | |
| | | MSE | 0.087 | -0.064 | | 0.086 | 0.063 | | |
| 12 | 400 | Mean | 0.463 | 1.400 | 3646 | 0.468 | 1.415 | 3636 | 4.05 |
| | | SE | 0.103 | 0.086 | | 0.104 | 0.085 | | |
| | | Bias | -0.036 | -0.100 | | -0.032 | -0.085 | | |
| | | MSE | 0.039 | 0.039 | | 0.040 | 0.038 | | |
| 24 | 100 | Mean | 0.419 | 1.405 | 1233 | 0.449 | 1.509 | 1167 | 36.18 |
| | | SE | 0.171 | 0.144 | | 0.155 | 0.128 | | |
| | | Bias | -0.081 | -0.094 | | -0.051 | 0.009 | | |
| | | MSE | 0.199 | 0.139 | | 0.234 | 0.125 | | |
| 24 | 200 | Mean | 0.460 | 1.400 | 3048 | 0.495 | 1.500 | 2912 | 71.26 |
| | | SE | 0.105 | 0.024 | | 0.115 | 0.020 | | |
| | | Bias | -0.039 | -0.100 | | -0.005 | 0.000 | | |
| | | MSE | 0.090 | 0.066 | | 0.098 | 0.064 | | |
| 24 | 400 | Mean | 0.463 | 1.401 | 7872 | 0.498 | 1.500 | 7011 | 144.43 |
| | | SE | 0.079 | 0.060 | | 0.072 | 0.020 | | |
| | | Bias | -0.037 | -0.099 | | -0.002 | 0.000 | | |
| | | MSE | 0.039 | 0.039 | | 0.040 | 0.036 | | |
| 40 | 100 | Mean | 0.421 | 1.409 | 1808 | 0.462 | 1.545 | 1664 | 90.28 |
| | | SE | 0.127 | 0.106 | | 0.145 | 0.123 | | |
| | | Bias | -0.078 | -0.090 | | -0.038 | 0.045 | | |
| | | MSE | 0.200 | 0.137 | | 0.246 | 0.160 | | |
| 40 | 200 | Mean | 0.460 | 1.399 | 4478 | 0.504 | 1.544 | 4178 | 185.16 |
| | | SE | 0.097 | 0.083 | | 0.086 | 0.072 | | |
| | | Bias | -0.040 | -0.101 | | 0.004 | 0.044 | | |
| | | MSE | 0.105 | 0.074 | | 0.091 | 0.064 | | |
| 40 | 400 | Mean | 0.465 | 1.403 | 10705 | 0.513 | 1.541 | 10106 | 371.92 |
| | | SE | 0.060 | 0.050 | | 0.067 | 0.050 | | |
| | | Bias | -0.035 | -0.096 | | 0.013 | 0.041 | | |
| | | MSE | 0.048 | 0.040 | | 0.040 | 0.040 | | |

(b) Data with missing

| L | n | Estimate | LM β1 = 0.5 | LM β2 = 1.5 | LM AIC | LFM β1 = 0.5 | LFM β2 = 1.5 | LFM AIC | D |
|---|---|----------|-------------|-------------|--------|--------------|--------------|---------|---|
| 12 | 100 | Mean | 0.408 | 1.421 | 244 | 0.435 | 1.422 | 241 | 1.11 |
| | | SE | 0.339 | 0.274 | | 0.329 | 0.280 | | |
| | | Bias | -0.091 | -0.078 | | -0.065 | -0.078 | | |
| | | MSE | 0.315 | 0.680 | | 0.270 | 0.177 | | |
| 12 | 200 | Mean | 0.458 | 1.402 | 624 | 0.466 | 1.422 | 621 | 1.01 |
| | | SE | 0.217 | 0.180 | | 0.220 | 0.183 | | |
| | | Bias | -0.042 | -0.097 | | -0.033 | -0.077 | | |
| | | MSE | 0.107 | 0.328 | | 0.110 | 0.323 | | |
| 12 | 400 | Mean | 0.458 | 1.411 | 1537 | 0.464 | 1.430 | 1532 | 1.79 |
| | | SE | 0.147 | 0.122 | | 0.150 | 0.124 | | |
| | | Bias | -0.041 | -0.089 | | -0.035 | -0.070 | | |
| | | MSE | 0.053 | -0.049 | | 0.054 | 0.048 | | |
| 24 | 100 | Mean | 0.431 | 1.421 | 481 | 0.504 | 1.536 | 433 | 25.41 |
| | | SE | 0.232 | 0.194 | | 0.285 | 0.203 | | |
| | | Bias | -0.069 | -0.078 | | 0.004 | 0.036 | | |
| | | MSE | 0.265 | 0.174 | | 0.414 | 0.306 | | |
| 24 | 200 | Mean | 0.461 | 1.399 | 1246 | 0.539 | 1.590 | 1143 | 52.59 |
| | | SE | 0.152 | 0.127 | | 0.181 | 0.152 | | |
| | | Bias | -0.038 | -0.101 | | 0.038 | 0.059 | | |
| | | MSE | 0.110 | 0.086 | | 0.096 | 0.120 | | |
| 24 | 400 | Mean | 0.456 | 1.412 | 3064 | 0.513 | 1.579 | 2854 | 107.85 |
| | | SE | 0.104 | 0.086 | | 0.122 | 0.102 | | |
| | | Bias | -0.044 | -0.088 | | 0.013 | 0.079 | | |
| | | MSE | 0.052 | 0.048 | | 0.069 | 0.074 | | |
| 40 | 100 | Mean | 0.431 | 1.420 | 704 | 0.534 | 1.558 | 591 | 69.02 |
| | | SE | 0.191 | 0.159 | | 0.251 | 0.213 | | |
| | | Bias | -0.069 | -0.079 | | 0.034 | 0.058 | | |
| | | MSE | 0.261 | -0.174 | | 0.469 | 0.395 | | |
| 40 | 200 | Mean | 0.460 | 1.401 | 1829 | 0.556 | 1.564 | 1596 | 142.63 |
| | | SE | 0.125 | 0.105 | | 0.155 | 0.132 | | |
| | | Bias | -0.040 | -0.099 | | 0.056 | 0.064 | | |
| | | MSE | -0.111 | 0.080 | | 0.168 | 0.161 | | |
| 40 | 400 | Mean | 0.458 | 1.413 | 4501 | 0.538 | 1.563 | 4022 | 291.24 |
| | | SE | 0.086 | 0.072 | | 0.105 | 0.088 | | |
| | | Bias | -0.042 | -0.068 | | 0.038 | 0.063 | | |
| | | MSE | 0.054 | 0.048 | | 0.078 | 0.099 | | |

L = landmarks; n = sample size; LM = landmark model; LFM = landmark frailty model; AIC = Akaike information criterion; D = deviance; SE = standard error; MSE = mean square error.

Table 3

LM and LFM results in different scenarios when failure rate is 50%.

(a) Data without missing

| L | n | Estimate | LM β1 = 0.5 | LM β2 = 1.5 | LM AIC | LFM β1 = 0.5 | LFM β2 = 1.5 | LFM AIC | D |
|---|---|----------|-------------|-------------|--------|--------------|--------------|---------|---|
| 12 | 100 | Mean | 0.438 | 1.376 | 1053 | 0.445 | 1.402 | 1045 | 2.91 |
| | | SE | 0.165 | 0.138 | | 0.170 | 0.141 | | |
| | | Bias | -0.062 | -0.123 | | -0.055 | -0.097 | | |
| | | MSE | 0.097 | 0.084 | | 0.100 | 0.082 | | |
| 12 | 200 | Mean | 0.462 | 1.397 | 2562 | 0.467 | 1.414 | 2552 | 3.52 |
| | | SE | 0.113 | 0.095 | | 0.115 | 0.097 | | |
| | | Bias | -0.038 | -0.103 | | -0.032 | -0.085 | | |
| | | MSE | 0.050 | 0.045 | | 0.051 | 0.043 | | |
| 12 | 400 | Mean | 0.456 | 1.400 | 6092 | 0.462 | 1.416 | 6076 | 6.22 |
| | | SE | 0.080 | 0.066 | | 0.080 | 0.066 | | |
| | | Bias | -0.044 | -0.100 | | -0.038 | -0.084 | | |
| | | MSE | 0.022 | 0.027 | | 0.022 | 0.024 | | |
| 24 | 100 | Mean | 0.436 | 1.374 | 2130 | 0.473 | 1.485 | 2006 | 49.95 |
| | | SE | 0.117 | 0.098 | | 0.130 | 0.101 | | |
| | | Bias | -0.064 | -0.126 | | -0.027 | -0.015 | | |
| | | MSE | 0.096 | 0.088 | | 0.112 | 0.084 | | |
| 24 | 200 | Mean | 0.462 | 1.396 | 5121 | 0.496 | 1.500 | 4923 | 97.54 |
| | | SE | 0.080 | 0.067 | | 0.088 | 0.070 | | |
| | | Bias | -0.038 | -0.104 | | -0.003 | 0.000 | | |
| | | MSE | 0.050 | 0.045 | | 0.057 | 0.044 | | |
| 24 | 400 | Mean | 0.456 | 1.400 | 12179 | 0.490 | 1.500 | 11794 | 191.85 |
| | | SE | 0.065 | 0.047 | | 0.061 | 0.051 | | |
| | | Bias | -0.044 | -0.100 | | -0.010 | 0.000 | | |
| | | MSE | 0.022 | 0.026 | | 0.024 | 0.021 | | |
| 40 | 100 | Mean | 0.438 | 1.377 | 3086 | 0.486 | 1.520 | 2880 | 124.04 |
| | | SE | 0.096 | 0.080 | | 0.110 | 0.093 | | |
| | | Bias | -0.062 | -0.123 | | -0.014 | 0.020 | | |
| | | MSE | 0.095 | 0.083 | | 0.121 | 0.095 | | |
| 40 | 200 | Mean | 0.452 | 1.407 | 7520 | 0.500 | 1.525 | 7107 | 245.49 |
| | | SE | 0.077 | 0.055 | | 0.074 | 0.024 | | |
| | | Bias | -0.048 | -0.092 | | 0.000 | 0.025 | | |
| | | MSE | 0.043 | 0.040 | | 0.051 | 0.045 | | |
| 40 | 400 | Mean | 0.457 | 1.402 | 17912 | 0.503 | 1.539 | 17089 | 488.90 |
| | | SE | 0.046 | 0.038 | | 0.051 | 0.040 | | |
| | | Bias | -0.043 | -0.098 | | 0.003 | 0.039 | | |
| | | MSE | 0.022 | 0.026 | | 0.025 | 0.025 | | |

(b) Data with missing

| L | n | Estimate | LM β1 = 0.5 | LM β2 = 1.5 | LM AIC | LFM β1 = 0.5 | LFM β2 = 1.5 | LFM AIC | D |
|---|---|----------|-------------|-------------|--------|--------------|--------------|---------|---|
| 12 | 100 | Mean | 0.443 | 1.407 | 416 | 0.453 | 1.439 | 412 | 1.10 |
| | | SE | 0.246 | 0.204 | | 0.223 | 0.210 | | |
| | | Bias | -0.057 | -0.093 | | -0.046 | -0.061 | | |
| | | MSE | 0.130 | 0.100 | | 0.137 | 0.106 | | |
| 12 | 200 | Mean | 0.467 | 1.401 | 1047 | 0.474 | 1.425 | 1043 | 1.60 |
| | | SE | 0.165 | 0.141 | | 0.167 | 0.139 | | |
| | | Bias | -0.033 | -0.098 | | -0.025 | -0.074 | | |
| | | MSE | 0.065 | 0.053 | | 0.065 | 0.051 | | |
| 12 | 400 | Mean | 0.457 | 1.408 | 2581 | 0.465 | 1.429 | 2573 | 2.63 |
| | | SE | 0.113 | 0.094 | | 0.115 | 0.095 | | |
| | | Bias | -0.043 | -0.092 | | -0.035 | -0.071 | | |
| | | MSE | 0.029 | 0.032 | | 0.029 | 0.029 | | |
| 24 | 100 | Mean | 0.438 | 1.403 | 826 | 0.506 | 1.541 | 751 | 35.88 |
| | | SE | 0.173 | 0.145 | | 0.212 | 0.179 | | |
| | | Bias | -0.062 | -0.097 | | 0.006 | 0.041 | | |
| | | MSE | 0.129 | 0.101 | | 0.200 | 0.192 | | |
| 24 | 200 | Mean | 0.465 | 1.400 | 2092 | 0.543 | 1.514 | 1940 | 71.14 |
| | | SE | 0.117 | 0.098 | | 0.139 | 0.118 | | |
| | | Bias | -0.034 | -0.100 | | 0.043 | 0.014 | | |
| | | MSE | 0.065 | 0.052 | | 0.092 | 0.080 | | |
| 24 | 400 | Mean | 0.457 | 1.409 | 5155 | 0.522 | 1.554 | 4848 | 145.23 |
| | | SE | 0.080 | 0.066 | | 0.094 | 0.078 | | |
| | | Bias | -0.043 | -0.091 | | 0.022 | 0.054 | | |
| | | MSE | 0.028 | 0.032 | | 0.041 | 0.049 | | |
| 40 | 100 | Mean | 0.438 | 1.403 | 1211 | 0.535 | 1.568 | 1045 | 96.44 |
| | | SE | 0.143 | 0.135 | | 0.185 | 0.158 | | |
| | | Bias | -0.062 | -0.097 | | 0.035 | 0.068 | | |
| | | MSE | 0.128 | 0.101 | | 0.237 | 0.250 | | |
| 40 | 200 | Mean | 0.455 | 1.413 | 3100 | 0.552 | 1.561 | 2762 | 195.96 |
| | | SE | 0.096 | 0.080 | | 0.120 | 0.103 | | |
| | | Bias | -0.045 | -0.087 | | 0.052 | 0.061 | | |
| | | MSE | 0.053 | 0.053 | | 0.086 | 0.128 | | |
| 40 | 400 | Mean | 0.458 | 1.409 | 5150 | 0.509 | 1.558 | 4841 | 143.24 |
| | | SE | 0.066 | 0.055 | | 0.045 | 0.038 | | |
| | | Bias | -0.042 | -0.091 | | 0.009 | 0.058 | | |
| | | MSE | 0.029 | 0.031 | | 0.049 | 0.028 | | |

L = landmarks; n = sample size; LM = landmark model; LFM = landmark frailty model; AIC = Akaike information criterion; D = deviance; SE = standard error; MSE = mean square error.

Table 4

LM and LFM results in different scenarios when failure rate is 100%.

(a) Data without missing

| L | n | Estimate | LM β1 = 0.5 | LM β2 = 1.5 | LM AIC | LFM β1 = 0.5 | LFM β2 = 1.5 | LFM AIC | D |
|---|---|----------|-------------|-------------|--------|--------------|--------------|---------|---|
| 12 | 100 | Mean | 0.390 | 1.197 | 1785 | 0.395 | 1.216 | 1776 | 3.39 |
| | | SE | 0.099 | 0.083 | | 0.102 | 0.085 | | |
| | | Bias | -0.110 | -0.303 | | -0.105 | -0.284 | | |
| | | MSE | 0.045 | 0.035 | | 0.046 | 0.034 | | |
| 12 | 200 | Mean | 0.474 | 1.394 | 5147 | 0.480 | 1.411 | 5135 | 5.67 |
| | | SE | 0.081 | 0.066 | | 0.080 | 0.068 | | |
| | | Bias | -0.026 | -0.106 | | -0.020 | -0.089 | | |
| | | MSE | 0.023 | 0.027 | | 0.023 | 0.025 | | |
| 12 | 400 | Mean | 0.464 | 1.393 | 12176 | 0.468 | 1.408 | 12155 | 11.02 |
| | | SE | 0.055 | 0.046 | | 0.057 | 0.047 | | |
| | | Bias | -0.036 | -0.107 | | -0.032 | -0.091 | | |
| | | MSE | 0.012 | 0.018 | | 0.011 | 0.015 | | |
| 24 | 100 | Mean | 0.388 | 1.195 | 3567 | 0.410 | 1.275 | 3455 | 53.91 |
| | | SE | 0.07 | 0.058 | | 0.077 | 0.065 | | |
| | | Bias | -0.112 | -0.305 | | -0.090 | -0.225 | | |
| | | MSE | 0.045 | 0.039 | | 0.054 | 0.036 | | |
| 24 | 200 | Mean | 0.475 | 1.395 | 10297 | 0.500 | 1.478 | 10071 | 107.83 |
| | | SE | 0.056 | 0.048 | | 0.061 | 0.051 | | |
| | | Bias | -0.025 | -0.105 | | 0.000 | -0.022 | | |
| | | MSE | 0.022 | 0.028 | | 0.027 | 0.025 | | |
| 24 | 400 | Mean | 0.464 | 1.391 | 24354 | 0.486 | 1.465 | 23904 | 215.03 |
| | | SE | 0.039 | 0.032 | | 0.042 | 0.035 | | |
| | | Bias | -0.036 | -0.109 | | -0.014 | -0.035 | | |
| | | MSE | 0.012 | 0.018 | | 0.012 | 0.011 | | |
| 40 | 100 | Mean | 0.388 | 1.196 | 5239 | 0.419 | 1.311 | 5006 | 132.35 |
| | | SE | 0.057 | 0.048 | | 0.065 | 0.056 | | |
| | | Bias | -0.112 | -0.304 | | -0.080 | -0.189 | | |
| | | MSE | 0.045 | 0.036 | | 0.058 | 0.044 | | |
| 40 | 200 | Mean | 0.476 | 1.396 | 15141 | 0.522 | 1.521 | 14643 | 276.26 |
| | | SE | 0.046 | 0.039 | | 0.052 | 0.044 | | |
| | | Bias | -0.023 | -0.103 | | 0.022 | 0.021 | | |
| | | MSE | 0.023 | 0.027 | | 0.030 | 0.028 | | |
| 40 | 400 | Mean | 0.464 | 1.393 | 35826 | 0.501 | 1.507 | 34828 | 555.80 |
| | | SE | 0.032 | 0.027 | | 0.036 | 0.030 | | |
| | | Bias | -0.036 | -0.107 | | 0.001 | 0.007 | | |
| | | MSE | 0.012 | 0.018 | | 0.013 | 0.011 | | |

(b) Data with missing

| L | n | Estimate | LM β1 = 0.5 | LM β2 = 1.5 | LM AIC | LFM β1 = 0.5 | LFM β2 = 1.5 | LFM AIC | D |
|---|---|----------|-------------|-------------|--------|--------------|--------------|---------|---|
| 12 | 100 | Mean | 0.383 | 1.205 | 707 | 0.392 | 1.232 | 703 | 1.36 |
| | | SE | 0.147 | 0.123 | | 0.150 | 0.120 | | |
| | | Bias | -0.117 | -0.295 | | -0.108 | -0.268 | | |
| | | MSE | 0.064 | 0.046 | | 0.064 | 0.045 | | |
| 12 | 200 | Mean | 0.480 | 1.390 | 2114 | 0.488 | 1.421 | 2109 | 2.19 |
| | | SE | 0.115 | 0.096 | | 0.118 | 0.099 | | |
| | | Bias | -0.020 | -0.110 | | -0.012 | -0.079 | | |
| | | MSE | 0.029 | 0.031 | | 0.029 | 0.028 | | |
| 12 | 400 | Mean | 0.465 | 1.397 | 5156 | 0.472 | 1.420 | 5145 | 4.88 |
| | | SE | 0.080 | 0.067 | | 0.082 | 0.068 | | |
| | | Bias | -0.036 | -0.103 | | -0.028 | -0.080 | | |
| | | MSE | 0.015 | 0.021 | | 0.015 | 0.017 | | |
| 24 | 100 | Mean | 0.382 | 1.204 | 1408 | 0.432 | 1.382 | 1324 | 37.27 |
| | | SE | 0.104 | 0.087 | | 0.124 | 0.105 | | |
| | | Bias | -0.117 | -0.295 | | -0.067 | -0.118 | | |
| | | MSE | 0.062 | 0.046 | | 0.088 | 0.088 | | |
| 24 | 200 | Mean | 0.480 | 1.398 | 4228 | 0.546 | 1.477 | 4056 | 74.70 |
| | | SE | 0.082 | 0.068 | | 0.095 | 0.080 | | |
| | | Bias | -0.020 | -0.102 | | 0.046 | -0.023 | | |
| | | MSE | 0.029 | 0.031 | | 0.044 | 0.043 | | |
| 24 | 400 | Mean | 0.464 | 1.397 | 10313 | 0.514 | 1.537 | 9964 | 154.33 |
| | | SE | 0.056 | 0.047 | | 0.065 | 0.054 | | |
| | | Bias | -0.036 | -0.103 | | 0.014 | 0.037 | | |
| | | MSE | 0.015 | 0.021 | | 0.022 | 0.020 | | |
| 40 | 100 | Mean | 0.382 | 1.205 | 2065 | 0.458 | 1.472 | 1874 | 101.58 |
| | | SE | 0.085 | 0.072 | | 0.109 | 0.094 | | |
| | | Bias | -0.118 | -0.295 | | -0.042 | -0.027 | | |
| | | MSE | 0.064 | 0.047 | | 0.112 | 0.133 | | |
| 40 | 200 | Mean | 0.482 | 1.401 | 6211 | 0.581 | 1.575 | 5799 | 214.15 |
| | | SE | 0.067 | 0.056 | | 0.083 | 0.071 | | |
| | | Bias | -0.018 | -0.099 | | 0.081 | 0.075 | | |
| | | MSE | 0.029 | 0.03 | | 0.058 | 0.077 | | |
| 40 | 400 | Mean | 0.465 | 1.397 | 15169 | 0.541 | 1.557 | 14730 | 430.87 |
| | | SE | 0.046 | 0.038 | | 0.036 | 0.068 | | |
| | | Bias | -0.035 | -0.103 | | 0.041 | 0.057 | | |
| | | MSE | 0.014 | 0.020 | | 0.028 | 0.043 | | |

L = landmarks; n = sample size; LM = landmark model; LFM = landmark frailty model; AIC = Akaike information criterion; D = deviance; SE = standard error; MSE = mean square error.

3.1. Landmark Models vs. Simple Cox Model

Both LFM and LM performed better than the simple Cox model. In all scenarios, bias and MSE were lower in the landmark models; however, this difference decreased as the sample size increased from 100 to 200 and 400. The bias did not change substantially with the failure rate.

3.2. Comparing LFM vs. LM

The performance of the models was evaluated based on their ability to estimate the true value of the parameters and their ability to classify and predict the actual survival.

3.2.1. Ability to Estimate the True Value of the Parameters

The mean parameter estimates with their SE, bias, and MSE for the 30% failure rate are shown in Table 2. Overall, bias and MSE were lower in LFM than in LM. According to the deviance and AIC indices, with 12 landmarks and a sample size of 100 there was no statistically significant difference between LFM and LM in either data set (deviance 1.76 in the complete data and 1.11 in the incomplete data). In the complete data, the deviance index showed a statistically significant difference between the two models at sample sizes of 200 and 400: AICs were 1515 (LFM) versus 1521 (LM) at n = 200, and 3636 versus 3646 at n = 400; in both cases the deviance exceeded the mixture chi-square value and the AIC was lower in LFM. In these cases, the bias and MSE of the two parameters were slightly lower in LFM. In the incomplete data, the deviance index showed no significant difference between the two models at any sample size. In all scenarios with 24 and 40 landmarks, the deviance and AIC indices showed that LFM fitted better than LM, and the mean estimates of the two parameters in LFM were closer to the true values; this was more pronounced for the continuous covariate. According to the results for the 50% and 100% failure rates, summarized in Tables 3 and 4, the superiority of LFM over LM was even greater, especially in the models with 12 landmarks.

3.2.2. Ability to Classify and Predict the Actual Survival

We used the DCI to assess the discrimination ability of the two models, run with a fixed sample size and failure rate at three different numbers of landmarks (6, 12, and 24); a DCI greater than 0.5 indicates that a model has discrimination ability. As illustrated in Figure 1, LFM performed better, and this advantage became more evident as the number of landmarks increased from 6 to 12 and 24. A larger area under the curve indicates a more accurate model. DPE and DRPE, which reflect calibration ability, are plotted in Figures 2 and 3. The error rates and relative error rates in LFM were much lower than those in LM, and this became more prominent as the number of landmarks increased.
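The DCI at a given landmark can be sketched as a Harrell-type concordance index restricted to the subjects still at risk at s; this simple usable-pair version ignores the censoring-weighting refinements a full implementation would need.

```python
def c_index_at_landmark(times, events, risk_scores, landmark):
    """Harrell-type concordance among subjects still at risk at s.

    A pair is usable when the subject with the shorter follow-up
    actually had the event; it is concordant when the model gave
    that subject the higher risk score. Values above 0.5 indicate
    discrimination ability.
    """
    at_risk = [(t, d, r) for t, d, r in zip(times, events, risk_scores)
               if t > landmark]
    usable = concordant = tied = 0
    for ti, di, ri in at_risk:
        for tj, _, rj in at_risk:
            if di == 1 and ti < tj:          # i failed first: usable pair
                usable += 1
                if ri > rj:
                    concordant += 1
                elif ri == rj:
                    tied += 1
    if usable == 0:
        return float('nan')                  # no usable pairs at this landmark
    return (concordant + 0.5 * tied) / usable
```

Evaluating this function over a grid of landmark times traces out a discrimination-over-time curve of the kind plotted in Figure 1.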
Figure 1

Simulated dynamic concordance index (DCI) for the landmark model (LM) and landmark frailty model (LFM): landmarks = 6, 12, and 24; sample size = 200; failure rate = 50%. Higher values of the C index indicate greater discrimination ability.

Figure 2

Simulated dynamic prediction error for landmark model (LM) and landmark frailty model (LFM): landmarks = 6, 12, and 24; sample size = 200; and failure rate = 50%. The lower values of prediction error indicate more calibration.

Figure 3

Simulated dynamic relative prediction error for landmark model (LM) and landmark frailty model (LFM): landmarks = 6, 12, and 24; sample size = 200; and failure rate = 50%. The figures show which model has been able to reduce the error more.

4. Results of Real Data

Results for the real data are summarized in Table 5 and Figure 4, which give the adjusted hazard ratios of the variables with their confidence intervals (CI), together with the AIC and deviance indices used to assess the models. While SBP was highly significant in both LFM and LM, it had no significant effect on cardiovascular events in the simple Cox model (p = 0.258). LFM fitted better, with an AIC of 5437 versus 6385 for LM; the deviance between the two models was 559.7 (p < 0.001). After adjusting for the treatment effect and the baseline risk factors (Figure 4), the hazard ratio (HR) decreased over time in both models in line with the decreasing SBP, and this reduction was greater in LFM. In contrast, the HR was constant over time in the Cox model. As blood pressure decreases, the predicted 3-year survival increases, and LFM predicts higher survival than LM and the simple Cox model (Figure 5).
Table 5

Static and dynamic effect of SBP on cardiovascular event.

| Variables | Simple Cox HR | p value | LM HR | p value | LFM HR | p value |
|-----------|---------------|---------|-------|---------|--------|---------|
| Age at enrolment day, yr | 1.051 | <0.001 | 1.018 | <0.001 | 1.022 | <0.001 |
| Gender (female) | 0.691 | <0.001 | 1.019 | 0.164 | 0.843 | 0.291 |
| TCH (mmol/L) | 1.001 | <0.001 | 1.001 | 0.163 | 1.000 | 0.582 |
| HDL-C (mmol/L) | 0.986 | <0.001 | 0.991 | 0.001 | 0.987 | 0.023 |
| Current smoker | 1.820 | <0.001 | 1.178 | 0.040 | 1.271 | 0.045 |
| Treatment, intensive | 0.720 | <0.001 | 0.482 | <0.001 | 0.475 | <0.001 |
| SBP (mmHg) | 1.001 | 0.258 | 1.083 | <0.001 | 1.109 | <0.001 |
| SBP × time | | | 0.996 | <0.001 | 0.839 | <0.001 |
| SBP × time² | | | 0.966 | <0.001 | 0.895 | <0.001 |

LM = landmark model; LFM = landmark frailty model; HR = hazard ratio; TCH = total cholesterol; HDL-C = high-density lipoprotein; SBP = systolic blood pressure.

Figure 4

(a) Systolic blood pressure (SBP) in the two treatment groups over time. The SBP targets in the intensive and standard groups were less than 120 mmHg and less than 140 mmHg, respectively. (b) Predicted hazard ratio of systolic blood pressure over time, adjusted for the treatment effect and other covariates. As blood pressure decreases over time due to treatment, the risk also decreases, and this reduction is greater in LFM; the hazard ratio in the simple Cox model (which included only baseline SBP) is fixed over time. AIC = 5437 and 6385 in LFM and LM, respectively; deviance = 559.7 (p < 0.001).

Figure 5

Dynamic prediction of survival within window (w = 3) by adjusting the covariates in LFM, LM, and simple Cox model.

5. Discussion

5.1. Discussion of Simulation Data

DPM incorporates time-dependent marker information during follow-up to improve individual survival prediction probabilities; at any follow-up time, the updated marker value can be used to generate a dynamic prediction [10-12]. These models are essential for identifying high-risk individuals and for timely clinical decision-making. Recently, LM as a DPM has been extensively investigated [9, 24], and some researchers have used LM in different survival-data settings such as competing risks and cure data [14, 20]. However, little attention has been paid to individuals' frailty, the regularity of visits, and the correlation between different landmarks. LM can also be affected by how the landmarks are selected and by their number, and ignoring these issues may lead to estimation error. Hence, we proposed a modified LM with a frailty parameter, the LFM. In LM, only individuals who have neither experienced the event of interest nor been censored by a given landmark time enter that landmark's analysis; frailty therefore plays a critical role, because the less frail patients are the ones repeatedly retained across sequential landmarks. By including patient frailty in the analysis, our proposed model overcomes these problems. Simulations showed that both LFM and LM had a relative advantage over the simple Cox model, confirmed by various criteria such as bias and MSE: bias and MSE for both the dichotomous and continuous covariates were higher in the simple Cox model, in line with other studies [20, 31]. In the simple Cox model, the estimation error decreased slightly and the estimates moved closer to the true values as the sample size increased, but the simple Cox model still lagged behind the landmark models. LFM and LM were then compared across different sample sizes, numbers of landmarks, failure rates, and data structures.
Overall, the superiority of LFM over LM was confirmed in the present study. This was clearest for large sample sizes and higher numbers of landmarks, in both complete and incomplete data. To the best of our knowledge, this is the first study to investigate the effect of the number of landmarks on the accuracy of the results. Wright et al. [31] performed LM with 20 landmarks, and others have empirically found 20 to 100 landmarks to be appropriate [24]. However, with large sample sizes the stacked data become very large and running the programs becomes time-consuming. There was no significant difference between LM and LFM for small sample sizes and few landmarks, based on the deviance and AIC indices. Both models performed better as the failure rate increased; although no formal statistical comparison was made, LFM fitted more appropriately with smaller estimation errors in each case. In most settings, the discrimination ability of LFM exceeded that of LM: the DCI was above 70% in LFM, while this index was lower in LM, and the difference between the DCIs increased with the number of landmarks. Evaluation of calibration ability (DPE and DRPE) likewise showed that LFM performed better than LM.

5.2. Discussion of SPRINT Data

Hypertension is not only a major cardiovascular risk factor but also strongly influences the occurrence of events following therapeutic interventions [32]. In this study, we showed that treating hypertension as a dynamic risk factor is essential for obtaining a realistic estimate of treatment success in cardiovascular disease; it should therefore be considered dynamically over the course of treatment, and not only at admission as a baseline measurement, as emphasized by other similar studies [9, 33]. The SPRINT data, which contain repeated measurements of SBP as a single longitudinal biomarker, confirmed the simulation results; as mentioned in the previous section, more than one biomarker could be used in landmark models. When only the baseline blood pressure was used (simple Cox model), the role of SBP was hidden by the dominance of the treatment effect, so it was not recognized as a cardiovascular risk factor. This agrees with our previous study [22] and with the findings of the SPRINT Research Group [29]. In both landmark models, as SBP decreased after treatment, its hazardous effect was also reduced. While the HR in the simple Cox model was close to 1 and constant over time, the HR in the two landmark models was close to 3 at the beginning of the study and then decreased as blood pressure decreased over time, with a steeper significant reduction in our proposed model. Furthermore, in the simple Cox model the relative effect of intensive treatment was 39% (1/0.720 − 1) compared with standard treatment, whereas in LFM and LM it was 110% and 107%, respectively. This means that the protective effect of intensive treatment is exhibited much more strongly in our model than in the simple Cox model and LM; thereby, the predicted 3-year dynamic survival is higher in LFM.
Other studies on blood pressure have considered landmarks separately, whereas we used a model in which the landmarks were considered continuously [21, 33, 34]. This study showed that landmark models can help clinicians make better decisions for diagnosis and treatment. Landmark models, and especially our proposed model, are useful for risk assessment when the data are incomplete or irregular, as in our data.
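Considering the landmarks continuously rests on stacking one risk set per landmark into a single "super-dataset", with follow-up truncated at a fixed prediction horizon. A sketch of that construction in plain Python (the field names and the 3-year horizon are assumptions for illustration, not the paper's code, which used R):

```python
def build_landmark_dataset(subjects, landmarks, horizon):
    """Stack landmark-specific risk sets into one dataset.

    subjects: list of dicts with keys
      'id', 'event_time', 'event' (1 = event, 0 = censored),
      'sbp' (list of (measurement_time, value) pairs).
    Returns one row per subject per landmark at which the subject
    is still at risk and has at least one SBP measurement."""
    rows = []
    for s in landmarks:
        for subj in subjects:
            if subj['event_time'] <= s:
                continue  # no longer at risk at landmark s
            past = [v for t, v in subj['sbp'] if t <= s]
            if not past:
                continue  # no biomarker value observed yet
            # administratively censor at the prediction horizon
            t_end = min(subj['event_time'], s + horizon)
            event = subj['event'] if subj['event_time'] <= s + horizon else 0
            rows.append({'id': subj['id'], 'landmark': s,
                         'time': t_end, 'event': event,
                         'sbp': past[-1]})  # most recent SBP at s
    return rows

patients = [
    {'id': 1, 'event_time': 2.5, 'event': 1,
     'sbp': [(0.0, 150), (1.0, 138)]},
    {'id': 2, 'event_time': 4.0, 'event': 0,
     'sbp': [(0.0, 142), (1.0, 135), (2.0, 130)]},
]
stacked = build_landmark_dataset(patients, landmarks=[0.0, 1.0, 2.0],
                                 horizon=3.0)
```

Fitting a Cox model on the stacked rows, stratified or smoothed over the landmark variable, gives the continuous landmark supermodel; the LFM additionally adds a per-subject frailty term across the repeated rows.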

6. Conclusion

In this study, we provided a modified LM that accounts for the frailty of the patients as well as the correlation between the landmarks. Our approach fits better, in the sense that it has a better GOF, improved real-data analysis, and a more optimized cardiovascular risk assessment.
References (first 10 of 29 shown)

1. Cook NR. Statistical evaluation of prognostic versus diagnostic models: beyond the ROC curve. Clin Chem. 2007.
2. Dafni U. Landmark analysis at the 25-year landmark point. Circ Cardiovasc Qual Outcomes. 2011.
3. Shi H, Yin G. Landmark cure rate models with time-dependent covariates. Stat Methods Med Res. 2017.
4. Ferrer L, Putter H, Proust-Lima C. Individual dynamic predictions using landmarking and joint modelling: validation of estimators and robustness assessment. Stat Methods Med Res. 2018.
5. Attar A, Sayadi M. Effect of chronic kidney disease on cardiovascular events: an epidemiological aspect from SPRINT trial. Iran J Kidney Dis. 2019.
6. van Dieren S, Beulens JWJ, Kengne AP, Peelen LM, Rutten GEHM, Woodward M, van der Schouw YT, Moons KGM. Prediction models for the risk of cardiovascular disease in patients with type 2 diabetes: a systematic review. Heart. 2011.
7. Mallett S, Royston P, Dutton S, Waters R, Altman DG. Reporting methods in studies developing prognostic models in cancer: a review. BMC Med. 2010.
8. Attar A, Sayadi M, Jannati M. Effect of intensive blood pressure lowering on cardiovascular outcomes based on cardiovascular risk: a secondary analysis of the SPRINT trial. Eur J Prev Cardiol. 2018.
9. Hamoen M, Vergouwe Y, Wijga AH, Heymans MW, Jaddoe VWV, Twisk JWR, Raat H, de Kroon MLA. Dynamic prediction of childhood high blood pressure in a population-based birth cohort: a model development study. BMJ Open. 2018.
10. Roustaei N, Ayatollahi SMT, Zare N. A proposed approach for joint modeling of the longitudinal and time-to-event data in heterogeneous populations: an application to HIV/AIDS's disease. Biomed Res Int. 2018.