Literature DB >> 32160241

Under his thumb: The effect of president Donald Trump's Twitter messages on the US stock market.

Heleen Brans1, Bert Scholtens1,2.   

Abstract

Does president Trump's use of Twitter affect financial markets? The president frequently mentions companies in his tweets and, in doing so, tries to gain leverage over their behavior. We analyze the effect of president Trump's Twitter messages that specifically mention a company name on that company's stock market returns. We find that tweets from the president which reveal strong negative sentiment are followed by a reduced market value of the company mentioned, whereas supportive tweets have no significant effect. Our methodology does not allow us to draw conclusions about the exact mechanism behind these findings and can only be used to investigate short-term effects.

Entities:  

Year:  2020        PMID: 32160241      PMCID: PMC7065837          DOI: 10.1371/journal.pone.0229931

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


“My daughter Ivanka has been treated so unfairly by @Nordstrom. She is a great person—always pushing me to do the right thing! Terrible!” @realDonaldTrump.

Introduction

This is a tweet from the Twitter account of the president of the United States of America, Donald J. Trump. President Trump won the election of November 8, 2016 as the Republican candidate and became the 45th president of the USA. The US president can have a significant influence on the American economy [1,2]. One of the means to exert this influence is communicating with the public. In this respect, president Trump is the second US president to use Twitter to disseminate his thoughts. Although president Barack Obama also used Twitter, president Trump is the first president to communicate extensively with the public in a personal and informal manner using social media. President Trump has outspoken opinions that often lead to extensive coverage in the media. This study investigates whether and how president Trump's tweets influence the stock market returns of the companies they mention. As such, we study the short-term impact of this method of communication on the value of these firms.

President Trump is one of the most influential people on Twitter, with more than 58 million followers and some 40,000 tweets. @POTUS is the official presidential Twitter account. It was previously used by president Obama and is now reserved for the exclusive use of president Trump. But the president is much more active on his own personal account, @realDonaldTrump. Therefore, our study considers only the Twitter messages posted on this account. Compared to president Obama, president Trump regularly names publicly traded companies. The @BarackObama account has no tweets naming publicly traded companies, and the @POTUS44 account shows only one tweet mentioning a publicly traded company (Lehman Brothers) during president Obama's presidential term.

Does president Trump move the markets with his tweets? To examine this question, we use an event study. A key assumption in this type of study is that the event is unexpected.
We argue that this is the case with president Trump's tweets, as they relate to the president's mood and feelings about companies, which are difficult to predict. Another requirement of event studies is that the information be available to market participants. Tweets can be freely accessed and, assuming market analysts monitor president Trump's Twitter account and his comments on publicly traded companies, investors may react to the tweets as if they were public news releases [3]. We investigate whether the presidential tweets affect stock market returns in the first two years of his presidency and whether the sentiment of the tweet makes a difference.

News, tweets, and markets

News moves stock prices [4]. News reaches the public via channels such as newspapers, television, social media, and official statements. These media channels have been studied extensively [5-8]. Social media enable president Trump to share information with the public in real time. Twitter is a social medium widely used to broadcast and share information about activities, status, and opinions. Investors show interest in Twitter too. The Securities and Exchange Commission (SEC) approved the dissemination of information by companies via Twitter in 2013. Before this approval, Twitter was not considered a legitimate outlet for communication, since it violated Regulation Fair Disclosure, which seeks to eliminate selective disclosure. The regulation ensures that all investors have access to the same information and that no select groups are favored. Given the enormous amount of potentially important information that can be shared on Twitter, algorithms were developed after Twitter became a valid public disclosure source; these algorithms can detect tweets that are of interest to investors. Various studies scrutinize how social media relate to or predict stock market reactions. One study shows that the frequent occurrence of financial terms in tweets is a significant predictor of daily market returns [9]. Zheludev et al. [10] find that up to 10% of all Twitter messages contain lead-time information about financial data. Further, financial information on Twitter affects the stock market [11], supporting the assumption that president Trump's tweets might influence stock market returns too. Others investigate the use of social media via "sentiment analysis", text analysis that systematically identifies and extracts subjective information from source material, and argue that positively and negatively loaded tweets contain significant forecast information about the stock market index [10].
President Trump is very active on Twitter, commenting daily on recent news and events. His tweets are an important and novel source of market information to investors, since he has insider information on US government policy. The president provides high-value political information, since the markets can interpret his tweets in the context of his policy views. Another reason president Trump's tweets are interesting is that they can amount to "free advertising" for a company to a broad audience. When a company receives attention from the president, this might encourage political supporters to buy its products. However, as the tweets reflect the feelings of president Trump, they may not actually convey any novel information about the firm. Therefore, the value relevance of the tweets is not self-evident and needs to be tested explicitly. The stock market response helps detect whether they are indeed financially relevant. Some studies have investigated president Trump's tweets. Afanasyev et al. [12] do so regarding the impact on the exchange rate of the Russian ruble against the US dollar. Other studies align more closely with our research question, namely the relationship with stock market performance. A working paper by Born et al. [13] studies 15 company-specific tweets regarding ten firms in the period between his election and his swearing-in ceremony. They find that both positive and negative tweets elicited abnormal returns on the event date. Juma'h and Alnsour [14] study tweets from president Trump during his campaign period and his first year of presidency. They have 58 tweets with specific company names, which they combine with tweets about immigration, employment, tax reform, finance, and the economy. They show that, on average, there are no significant effects of the tweets on market indices or share prices. Ge et al. [15] study the president's tweets in his first year of presidency and have 48 company-specific tweets.
They find that these tweets slightly move company stock prices, especially those posted before the presidential inauguration (January 20, 2017). Further, although the response to negative tweets is larger than to positive ones, the difference between the two is not statistically significant. We aim to complement this literature by substantially expanding the sample and by specifically looking into the sentiment of the tweets to find out whether positively toned tweets have more or less impact than negatively toned ones. To this end, we use automated sentiment analysis and refrain from manual coding as in Ge et al. [15]. Our first hypothesis tests whether the president's posting of tweets that include company names has a significant influence on the stock market return of these firms. This is to find out whether financial investors indeed regard the information in the president's tweets as value-relevant. In this respect, we want to find out whether our substantially larger sample and more extended research period yield results similar to those of previous studies. Then, we investigate whether the sentiment of the tweet matters. This is because, according to the finance literature, investors react more strongly to negative than to positive news [16]. Hence, our second hypothesis tests whether the stock market return after negatively toned presidential tweets is significantly negative and more pronounced than the return after positive tweets.

Material and methods

An event study measures the impact of a specific event on the value of a firm using financial market data [17]. The usefulness of such a study results from the fact that, given rationality in the marketplace, the effects of an event will be reflected immediately in stock prices [17]. Using stock prices, it is possible to measure the economic impact of an event over a short time period [18]. The general flow of analysis in an event study is as follows (based on [17], pages 14–15): “… [first] define the event of interest and identify the period over which the security prices of the firms involved will be examined—the event window. […] The period of interest includes the day of the announcement of the event and the day after. This captures the price effects of announcements, which occur after the market closes on the announcement day. […] After identifying the event, it is necessary to determine the selection criteria for the inclusion of a firm in the event study. […] The appraisal of the event’s impact requires a measure of the abnormal return. […] The abnormal return is the actual ex post return of the security over the event window minus the normal return of the firm over the event window. The normal return is defined as the expected return without conditioning on the event taking place…”. As such, the event study methodology is well suited to inform about the value relevance of events, but it does not allow determining the mechanisms behind any market response to news. This specific event study examines tweets posted by president Trump that mention a publicly traded company. Twitter data are collected from the Twitter account of president Trump (@realDonaldTrump) over the period November 8, 2016 (Election Day) to November 8, 2018 (two years after Election Day). In this two-year period, the president posted over 5,600 tweets. All tweets were retrieved from TrumpTwitterarchive.com/archive, a database that contains all of president Trump's Twitter posts.
This archive also contains tweets that were deleted from Trump's Twitter account, but only deleted tweets that were online for at least 24 hours are considered in our study. This is because we need to take into account a breakpoint in the dataset: from January 27, 2017 onward, the archive monitored the Twitter messages in real time, whereas before that date the tweets were collected on a daily basis (at least once every 24 hours). This means that tweets that were online for less than 24 hours before January 27, 2017 are missing from the database and cannot be included in the study. Therefore, we decided to remove all tweets that were deleted within this 24-hour time frame, to remain consistent in the sampling process. To avoid 'contamination', we removed re-tweets, tweets pertaining to related but non-publicly listed companies, non-specific tweets, and tweets within the event window about any other company. We also removed companies involved in merger and acquisition activity, or that did a stock split, gave a profit warning, or saw a change in their top management team. The reason for doing so is that the literature usually finds a stock market response after this type of news [17]. If the president tweets about such news and we kept this tweet in our sample, it would not be possible to determine whether the market response results from the event as such or from the president's tweet. As a result, a sample of 100 tweets remains. For event studies, this is substantial and yields powerful test results [17]. However, the number is too small to allow regressing abnormal returns on presidential, firm, or investor properties. This would also require a theory regarding the impact of the president's tweets on firm or investor performance, which we lack. We retrieved daily return indexes of individual companies from DataStream, a database of global financial and macroeconomic data.
We also retrieved Standard & Poor's 500 (S&P 500) daily return indexes for each event. The S&P 500 is a stock market index based on the market capitalization of 500 large companies listed on the NYSE, NASDAQ, or CBOE. Due to its strict inclusion requirements, the S&P 500 index is one of the most accurate indicators of the economy and stock market of the United States. Therefore, we use the S&P 500 as the market index (i.e., benchmark) for our event study. To arrive at the normal (expected) stock market return, we use an estimation window of 250 days (ranging from [–251, –1]) to estimate the expected returns in the event window [17,18]. This window allows for appropriate estimation of expected returns [19]. Day 0 is the event day, and the event window includes 2 days (day 0 and day 1). We use this short event window to capture the effect of the event on stock prices while minimizing the influence of other factors [17,20]. With a longer event window, the likelihood of a substantial impact on market returns from other (confounding) factors increases considerably. The event window starts at the announcement day, because pre-event leakage is unlikely, as the tweets contain the president's personal opinions. Twitter operates around the clock, and therefore many tweets are posted outside stock exchange trading hours. To ensure that the market response to a tweet is considered relative to the closest trading time, tweets are separated into two groups: tweets inside trading hours and tweets outside trading hours. The first group relates to tweets occurring between 9:30 am and 4:00 pm EST. For these tweets, we use market closing prices. We acknowledge that the lack of access to high-frequency market information is a limitation of our study. For the group occurring outside trading hours, we use next-day market opening prices. S1 File (column 6) specifies all tweets as occurring either inside trading hours ('closing') or outside trading hours ('opening').
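The two-group assignment described above can be sketched as follows. This is a minimal illustration with our own function names, not the authors' code; the holiday calendar is omitted, and routing a pre-open tweet to that same day's open is our reading of the "closest trading time" rule.

```python
from datetime import datetime, time, timedelta

OPEN, CLOSE = time(9, 30), time(16, 0)  # NYSE regular session, EST

def is_trading_day(d):
    # Weekday check only; a real implementation would also
    # exclude exchange holidays (omitted here for brevity).
    return d.weekday() < 5

def price_reference(ts):
    """Map a tweet timestamp (EST) to the nearest tradable price:
    tweets inside trading hours use that day's closing price,
    all others use the opening price of the next session."""
    if is_trading_day(ts.date()) and OPEN <= ts.time() <= CLOSE:
        return ts.date(), "closing"
    # Pre-open tweets on a trading day use that day's open;
    # evening, weekend, and holiday tweets roll forward.
    if is_trading_day(ts.date()) and ts.time() < OPEN:
        d = ts.date()
    else:
        d = ts.date() + timedelta(days=1)
    while not is_trading_day(d):
        d += timedelta(days=1)
    return d, "opening"
```

For example, a Friday-evening tweet is matched to Monday's opening price, mirroring the 'closing'/'opening' labels in S1 File (column 6).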
When president Trump posts a tweet on a weekend or a holiday, it is assigned to the next trading day, since this is when investors are able to react to it. Sometimes one message spans multiple tweets, due to the limited number of characters that Twitter allows. These tweets can be recognized as ending with a series of dots, followed within a few minutes by another message starting with dots. We consider such a series as one single event. When multiple tweets concerning one company are assigned the same opening or closing rate, this is considered one event as well. The market and risk-adjusted returns model (market model) is the most frequently used model in event studies. This model relates the return of a security to the return of the market portfolio (in our case the S&P 500) and assumes a constant linear relation between the two [17]. In our event study, the securities differ between events (see S1 File) because the president tweets about different firms. Returns are calculated as the difference in log prices: R_t = ln(P_t) − ln(P_{t−1}). To determine the impact of the event, we calculate abnormal returns:

AR_{i,t} = R_{i,t} − (α_i + β_i · R_{m,t}),

where AR_{i,t} is the abnormal return, R_{m,t} the return of the market index, and R_{i,t} the actual return of event i on day t. α_i and β_i are ordinary least squares estimates of the market model, calculated over the estimation period. The beta coefficient measures the sensitivity of stock i to the market as a whole, whereas the alpha coefficient is the intercept of the market model for stock i. From the abnormal returns, the average abnormal return (AAR) can be calculated as follows:

AAR_t = (1/N) Σ_{i=1}^{N} AR_{i,t},

where N is the sample size, in this case the 100 Twitter messages. Aggregation of the abnormal returns over time yields the cumulative abnormal return (CAR):

CAR_i(T1, T2) = Σ_{t=T1}^{T2} AR_{i,t},

where T1 is 0 and T2 is 1, since the event window ranges from 0 to 1.
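The market-model computation above can be sketched in a few lines. This is an illustrative implementation under our own naming, using ordinary least squares over the estimation window and the last two observations as the event window [0, 1]:

```python
import numpy as np

def log_returns(prices):
    # R_t = ln(P_t) - ln(P_{t-1})
    p = np.asarray(prices, dtype=float)
    return np.diff(np.log(p))

def market_model_event_study(stock_ret, market_ret, event_len=2):
    """Estimate alpha and beta by OLS over the estimation window,
    then compute abnormal returns over the event window.

    The last `event_len` observations are the event window [0, 1];
    everything before them is the estimation window."""
    est_s, est_m = stock_ret[:-event_len], market_ret[:-event_len]
    beta, alpha = np.polyfit(est_m, est_s, 1)   # OLS slope, intercept
    ev_s, ev_m = stock_ret[-event_len:], market_ret[-event_len:]
    ar = ev_s - (alpha + beta * ev_m)           # AR_{i,t} over the window
    return alpha, beta, ar, ar.sum()            # last item: CAR_i(0, 1)

def aar(ar_matrix):
    # Average abnormal return across the N events, per event-window day.
    return np.mean(ar_matrix, axis=0)
```

Averaging the per-event abnormal returns with `aar` and summing over the window then gives the AAR and CAAR used in the tests below.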
From the cumulative abnormal returns, the cumulative average abnormal return (CAAR) can be calculated:

CAAR(T1, T2) = (1/N) Σ_{i=1}^{N} CAR_i(T1, T2).

We perform a parametric test to determine the significance of the results. The parametric test assumes a normally distributed sample [19]. We use the crude dependence adjustment (CDA) test, since it compensates for potential dependence of returns across events (to account for the fact that president Trump's Twitter messages might influence one another). The test statistic for day t is given as (following [19], p. 3081):

t_CDA = AAR_t / S(AAR),

where AAR_t is the equal-weighted portfolio mean abnormal return on day t, calculated as above. The standard deviation S(AAR) is estimated from the time series of AARs over the M days of the estimation window:

S(AAR) = sqrt( (1/(M − 1)) Σ_t (AAR_t − AAR*)² ),

where AAR* is the mean of AAR_t over the estimation window. Finally, to calculate t_CDA for the CAAR over an event window of L days, the following formula is used:

t_CDA = CAAR / ( sqrt(L) · S(AAR) ).

The descriptive statistics in Table 1 reveal that there is substantial skewness in the alphas and that their distribution is more peaked than normal. For this reason, we perform non-parametric tests next to the parametric ones. More specifically, we use the generalized sign test, where the null hypothesis assumes no (cumulative) abnormal returns [21]. For this test, the proportion of securities that have a positive (cumulative) abnormal return under the null hypothesis is determined. The fraction of (cumulative) abnormal returns of a given sign expected from the estimation period is [19]:

p = (1/N) Σ_{i=1}^{N} S_i / M_i,

where S_i is the number of positive (cumulative) abnormal returns of security i in the estimation window, and M_i is the number of non-missing estimation-period returns for security i. The generalized sign test statistic is:

Z = (w − N·p) / sqrt( N·p·(1 − p) ),

where w is the number of stocks in the event window for which the (cumulative) abnormal return is positive [19]. The main advantage of this test is that skewness in returns is taken into account.
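Both test statistics are short to compute once the abnormal returns are in hand. A sketch under our own naming (not the authors' code), with the CDA standard deviation taken from the estimation-window AAR series as described above:

```python
import numpy as np

def cda_tests(aar_est, aar_event):
    """Crude dependence adjustment t-statistics.

    aar_est: time series of AAR_t over the estimation window;
    aar_event: AAR_t over the event window. Using the standard
    deviation of the portfolio-level AAR series adjusts for
    cross-sectional dependence between events."""
    s = np.std(aar_est, ddof=1)                     # S(AAR)
    t_day = np.asarray(aar_event) / s               # per-day t for AAR_t
    L = len(aar_event)
    t_caar = np.sum(aar_event) / (np.sqrt(L) * s)   # t for CAAR over L days
    return t_day, t_caar

def generalized_sign_z(ar_est, ar_event):
    """Generalized sign test. ar_est, ar_event: (events x days) arrays
    of abnormal returns. Compares the observed number of positive
    event-window CARs with the fraction expected from estimation."""
    n = ar_est.shape[0]
    p_hat = np.mean(ar_est > 0)                     # expected positive share
    w = np.sum(ar_event.sum(axis=1) > 0)            # stocks with positive CAR
    return (w - n * p_hat) / np.sqrt(n * p_hat * (1 - p_hat))
```

Under the null, both statistics are compared against the standard normal distribution.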
Table 1

Descriptive statistics of abnormal returns in the estimation window.

                     Alpha      Beta       AAR
Mean                 0.0003     0.979      0.0000
Median               0.0002     1.022      0.0001
Standard deviation   0.0009     0.447      0.0017
Kurtosis             2.2730     0.093     -0.0004
Skewness             1.1890    -0.270     -0.0355
Minimum             -0.0014    -0.088     -0.0043
Maximum              0.0039     2.009      0.0048

Beta shows the sensitivity of the stock's returns to market returns. Alpha is the intercept of the market model for the stock. The AAR over the estimation window is shown in the last column.

We divide the sample into subsamples to perform an analysis of the sentiment of the tweets. The subgroups are created using SentiStrength, which extracts sentiment strength from informal English text. SentiStrength is a highly accurate sentiment analysis tool designed for short social web texts [22,23]. It was produced as part of the CyberEmotions project, which is supported by the EU FP7 programme. The tool is able to detect social media grammar and misspellings (such as HAPPPPY, omg, :-, b-day, and what's up) [23]. SentiStrength gives all tweets a score from -5 (very strong negative emotion or energy) to +5 (very strong positive emotion or energy). SentiStrength has a word list with positive and negative sentiment terms and their strength; some examples: dislike (-3), hate (-4), excruciating (-5), lover (4), coolest (3), and encourage (2). Moreover, emoticons are taken into account in this sentiment analysis as well; depending on the context of the tweet, an emoticon is given a positive or a negative notation. Each tweet gets a positive sentiment score between 0 and 5 and a negative sentiment score between 0 and -5 (see S1 File for an example). These scores are combined by summing the positive and negative scores to obtain a single sentiment polarity; in this study, the polarity ranges from -4 to +3 (S1 File). These polarity scores are then recoded to -1 for any negative number, +1 for any positive number, and zero for neutral (S1 File). We end up with 37 Twitter messages with a negative sentiment and 44 with a positive sentiment (leaving 19 neutral ones). SentiStrength has been tested for accuracy on tweets and predicts positive emotion with 60.6% accuracy and negative emotion with 72.8% accuracy [22]. Therefore, we checked all tweets to detect whether any were wrongly assigned. As a result, tweet number 79 (see S1 File) was deleted from this analysis. We test for differences between the groups with a two-sample t-test.
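The scoring and recoding steps above can be sketched as follows (our own helper names; the score pairs stand in for SentiStrength output, not real data from the study):

```python
from collections import Counter

def polarity(pos, neg):
    # Sum the SentiStrength positive (0..5) and negative (0..-5)
    # scores of one tweet into a single polarity.
    return pos + neg

def recode(score):
    # Recode polarity to +1 (positive), -1 (negative), 0 (neutral).
    return (score > 0) - (score < 0)

def sentiment_groups(score_pairs):
    # Count tweets per recoded sentiment subgroup.
    return Counter(recode(polarity(p, n)) for p, n in score_pairs)
```

Applied to the full sample, this grouping yields the negative, positive, and neutral subsamples used in the difference tests.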
To calculate the t-value, the following formula is used:

t = (AAR₁ − AAR₂) / sqrt( s² · (1/n₁ + 1/n₂) ),

where n₁ and n₂ are the number of events in each subgroup, and the pooled variance s² is determined via:

s² = ( (n₁ − 1)·s₁² + (n₂ − 1)·s₂² ) / (n₁ + n₂ − 2).

The hypothesis used in the difference test is H0: AAR₁ = AAR₂ versus H1: AAR₁ ≠ AAR₂. We make the following three comparisons: negative sentiment vs. positive sentiment, to see whether a positive tweet has a different AAR than a negative tweet; negative tweets versus all tweets, to see if tweets with negative sentiment have a different effect compared to all tweets; and the same for positive tweets.
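The pooled two-sample t-statistic above translates directly into code; a self-contained sketch (our own function name):

```python
import math

def pooled_two_sample_t(x1, x2):
    """Two-sample t-statistic with pooled variance:
    t  = (m1 - m2) / sqrt(s2 * (1/n1 + 1/n2)),
    s2 = ((n1-1)*v1 + (n2-1)*v2) / (n1 + n2 - 2)."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    v1 = sum((x - m1) ** 2 for x in x1) / (n1 - 1)   # sample variance s1^2
    v2 = sum((x - m2) ** 2 for x in x2) / (n2 - 1)   # sample variance s2^2
    s2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(s2 * (1 / n1 + 1 / n2))
```

The statistic is evaluated against a t-distribution with n₁ + n₂ − 2 degrees of freedom.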

Presidential powers

Descriptive statistics of the sample returns are shown in Table 1. The beta is on average 0.979, which means that the stocks are on average only slightly less sensitive to the market (S&P 500) than the market itself. As such, our sample reflects the US stock market as a whole quite well. Fig 1 shows AARs for all events in the two-day event window. The figure reveals that one event (#87) is a notable outlier, as it reflects a drop in stock price of almost 20%. The tweet associated with this event is the following: “Twitter ‘SHADOW BANNING’ prominent Republicans. Not good. We will look into this discriminatory and illegal practice at once! Many complaints”. Twitter was under fire in this period due to its fake-account purge. President Trump reacted to this news item by stating that the practice is not legal and will be investigated. Given its extreme magnitude, the event was removed from the sample; including it would have strengthened our results. No other outliers were detected.
Fig 1

Average abnormal stock market returns in percentages on days 0 and 1 for all events.

Table 2 reports results from our estimations of the (cumulative) AARs in the event window. This table shows that the AAR on the event day is 0.1%, which is not significant at any significance level according to both tests (i.e., parametric and non-parametric). On the first day after the event day, the AAR is -0.07%, which is not significant at any level either. Further, the CAAR of 0.5% in the event window [0,1] is not statistically significant. Therefore, the null hypothesis of no abnormal returns after a tweet from president Trump cannot be rejected. Thus, we conclude that on the day of a tweet about a company and on the first day thereafter, there is no statistically significant effect on the stock market value of the mentioned company. As such, it seems the president does not move the stock market with his tweets in a statistically significant way, and the tweets are not economically meaningful. These results confirm those of Juma'h and Alnsour [14] and Ge et al. [15], who used much smaller samples. However, they contrast with those of Born et al. [13], who studied a sample of fifteen tweets between the presidential elections and the swearing-in of president Trump.
Table 2

Estimation results and test statistics of (cumulative) AARs in relation to president Trump’s tweets with companies named.

Day       AAR        p-value parametric test   p-value non-parametric test
0          0.0010    0.5383                    0.5636
1         -0.0007    0.6544                    0.5636

Window    CAAR
[0; 1]     0.0051    0.5098                    0.5402
To test our second hypothesis regarding the sentiment of the tweets, we investigate positive, negative, and neutral tweets. The hypothesis of no difference in AAR between subgroups on the event day is tested via a two-sided difference test. We specifically test whether the difference between responses to negative and positive tweets is significantly different from zero. This is reported in Table 3. The table shows that the AARs are significantly different from zero at the 10% level for both subgroups on the event day (day 0) according to the non-parametric test. The AARs for one day after the event day are statistically significant for the negative tweets only. Further, the difference in the AARs between the two subsamples is 0.7% and 1.0% on days 0 and 1, respectively, significant at the 10% level. Therefore, the hypothesis that the AARs of the two subgroups do not differ from each other can be rejected. This contrasts with the results of Ge et al. [15], who do not find such an effect. The difference might be related to the fact that we study more presidential tweets over a longer period, in which market participants came to realize that presidential tweets might be financially relevant for companies. However, such a change in their views is hard to test.
Table 3

Comparing the response to tweets with negative and positive sentiment.

Day                                                0          1
AAR negative tweets                               -0.0037    -0.0071
  p-value parametric test                          0.0781     0.0046
  p-value non-parametric test                      0.0905     0.0905
AAR positive tweets                                0.0035     0.0028
  p-value parametric test                          0.0830     0.1346
  p-value non-parametric test                      0.0523     0.1502
Difference between positive and negative tweets    0.0072     0.0098
  p-value of difference                            0.0808     0.0931
We perform additional difference tests regarding the sentiment of the tweets by using the information in the SentiStrength scores. First, we compare positive tweets and all other tweets (i.e., neutral and negative). Here, the difference between the AARs is 0.25% and 0.35% for day 0 and day 1, respectively (S1 File). A one-sided t-test compares the groups. This test shows that the difference between tweets revealing positive sentiment and all other tweets is not statistically significant. Further, we compare tweets revealing negative sentiment with all others. Here, there is a marginally significant underperformance after negative tweets on the event day, but not on the first day thereafter (S1 File). In addition, we account for the strength of the opinion expressed in the tweets. We first compare the market response to very strong opinions (SentiStrength scores -4, -3, 3, and 4) with that to moderate to neutral opinions (SentiStrength scores -1, 0, and 1). The results (S1 File) show that the investor response to the strongest opinions significantly deviates from that to neutral and moderate ones on the event day. We also investigate whether the response to strong positive opinions relative to neutral ones differs from the response to strong negative ones. Here, we use scores in the (absolute value of) 2-4 range and compare them with a SentiStrength score of 0. We observe (S1 File) a significant difference for very strong negative tweets on day 0. Comparing the very strong positive tweets (2, 3, 4) to the very strong negative tweets (-2, -3, -4) gives a significant difference on day 0 as well. Thus, regarding the second hypothesis, we observe that accounting for the sentiment of the tweet informs our analysis. This is especially the case for tweets from the president that reveal strong negative sentiment.

Discussion

We study the impact of president Trump's Twitter messages on the stock market. We investigate 100 of his tweets that include the name of a publicly listed company over the first two years of his presidency. We also carry out an analysis of the sentiment of the tweets using textual analysis. Overall, the president's tweets did not yield a significant response from the stock market. However, when we account for the sentiment of the tweets, we observe that especially tweets with a (strong) negative sentiment render a significant negative response from the investment community in an economically meaningful way. This is in line with previous research [14,15] and confirms that investors are more sensitive to bad news than to good news [9,16]. We feel that our systematic sampling approach, the substantially larger number of events, the inclusion of non-parametric testing, and the reliance on textual analysis regarding the sentiment of the tweets contribute to a better understanding of the economic impact of communications from president Trump. There are limitations to this event study. First, the tweets were selected manually to find tweets mentioning company names. This could lead to the incorrect inclusion or exclusion of events in the dataset. Further, this study uses daily data; future research may use high-frequency data, since social media messages, especially tweets, diffuse rapidly among investors. Future research could also focus on whether the effects of the president's tweets weaken over time as people become accustomed to his way of using Twitter. It would also be interesting to compare our results with those for another president or influencer. Our results show that president Trump's tweets about companies have a negative impact on stock market returns when they reveal strong negative sentiment. The firms themselves usually are not in a position to respond.
30 Oct 2019 PONE-D-19-26269 Under His Thumb The Effect of President Donald Trump’s Twitter Messages on the US Stock Market PLOS ONE Dear Mr Scholtens, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. In particular, we would like you to address the issues raised by the two referees about the data processing, the size of the dataset and the limitations of this study. The two referees also raised concerns about the terminology used to refer the temperament of President Trump and also asked for a more detailed explanation and justification of the methodology. We ask you to make sure that the methods are described in sufficient detail to enable reproducibility and replicability. We would appreciate receiving your revised manuscript by Dec 14 2019 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'. 
A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'. Please note that, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. We look forward to receiving your revised manuscript. Kind regards, Alexandre Bovet, Ph.D. Academic Editor PLOS ONE Journal Requirements: 1. When submitting your revision, we need you to address these additional requirements. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide. 3. Please amend your manuscript to include your abstract after the title page. 4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly.
Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Partly ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: No Reviewer #2: Yes ********** 5.
Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: The authors investigate the effects of President Donald Trump's Twitter use on financial returns of companies he mentions at the daily timescale using an event study methodology. The analysis is conducted appropriately; however, some clarifications are needed in order for the manuscript to be publishable in my opinion. 1) The authors declare they eliminate from their analysis any tweet that was online for less than 24 hours. This seems to me an unnecessary restriction, since financial markets react on a much faster time scale nowadays and tweet deletion doesn't necessarily imply that it would have no effect. I invite the authors to expand on the justification for this choice of threshold; 2) The authors refer numerous times to the President's "temperamental nature". While it can be understood why they call it that, it seems inappropriate to me in a scientific journal and I would suggest they find an agreement with the editor on the choice of wording; 3) I would suggest the authors introduce the event study methodology and statistical testing with more detail, giving particular attention to the distinction between quantities calculated in the estimation period or in the observation period. 4) In the definition of the CDA test statistic, \\overline{u} and \\overline{u_it} appear in the formulas but are never defined. This makes the explanation particularly confusing. I strongly suggest that the authors revise the whole section and spend more words explaining the methods, since they cannot be easily understood from the text as they are written; 5) Reference to the original paper by Cowan (Cowan, A. R. (1992).
Nonparametric event study tests. Review of Quantitative Finance and Accounting, 2(4), 343-358) should be included; 6) In the definition of the non-parametric test, no clear definition is given for quantities M_i and S_it. Also, there is some confusing phrasing regarding positive abnormal returns (line 224) and negative abnormal returns (line 227), which should be made clearer. 7) In Table 1, a p-value for the Jarque-Bera test would be easier to understand for readers who don't know the quantiles of the Chi Squared distribution by heart; 8) Figure 1 panels should be made the same size and with the same x axis limits in order to be easier to read and compare. Also, axis labels instead of plot titles would make the plot easier to understand at a glance. Reviewer #2: The paper presents an analysis of the effects of president Trump's Twitter messages on the stock market. The paper is rigorous in the data analysis and complies with the strict requirements of PLOS ONE with regard to data availability and statistical analysis. However, I have a number of complaints about the data used in the paper, the presentation and justification of the data analysis techniques employed, and the strength and interpretation of the conclusions. I think these complaints should be acted upon to make the paper better. I detail my comments in the following: 1) Data The dataset used in the analysis, in my view, is too restrictive. There is no real justification for why it should be based on just the first two years of Trump's presidency. I can understand the reasons for the starting date, but there is no real reason to restrict it to just two years. In fact, it would have been much more useful to extend it to the present day (so one additional year) and then study if the activity of the president actually produced the observed effects on the market and perhaps showed some decay over time. The major problem is that the data set is too small, once irrelevant or questionable tweets are removed.
The analysis is based on only 100 tweets! Not to mention that these 100 tweets then become only 81 once sentiment analysis is applied to them (assuming the 19 neutral tweets did not find any use)! The set is too small, in my view, to support the analysis that followed, as other factors happening at the same time could have produced the observed result. The use of more data would certainly make this possibility smaller. For the same reasons I would also extend the event window from 2 to maybe 3 or 4 days. This would make it possible to account for holidays or other delaying factors on the effect the tweet could have on the market. Also, an analysis of the effects of the window size on the conclusions would be quite interesting. Minor comments: - the distinction between "estimation window" and "event window" is not clear. - why did you not have access to more high-frequency information? One of the authors is from a school of management, so this should not have been so difficult. 2) Data analysis techniques While the formulation of the hypotheses and style of the analysis is very clear and presented with rigour, the motivations for the choice of some of the techniques have not been clearly justified. Some more explanation of the reasons you chose the "market and risk adjusted returns model" would have been useful. More importantly, why did you choose the "crude dependence adjustment model"? And could you provide some reference for it? Why did you use the Jarque-Bera statistic? Why not one of the many other statistics? The choice of some of these methods should be clearly justified so as not to suggest the choice could have been biased by the results. Minor comments: - what does the index t stand for in the formula at page 8 line 208? - why lose the strength of the sentiment provided by SentiStrength and just account for its positive and negative nature (as reported in line 244, page 10)?
It would have been quite interesting to relate the strength of the opinion to its effects. It could also have provided more interesting arguments for the analysis reported at the end of page 14. - in the discussion you mention an "analysis of the tone of the tweets by using textual analysis". If this is a reference to the use you made of the results provided by SentiStrength, I am afraid it is not that strong! 3) Interpretation of the results I find it quite strange that in a scientific paper the authors could make such "personal" and "opinionated" comments as: "We assume that the temperamental nature of the president results in lack of predictability" (page 3, line 64) or again "temperamental nature" (page 7, line 163) or, finally, your mention of the possibility that someone with prior knowledge of the president's tweets may be able to profit from them (page 15, line 351). I think such comments are better left out of a scientific paper so as to avoid possible attacks that would lower the scientific value of the conclusions. -- Minor point: page 4, line 106: "the may" -> "may" ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step. 5 Dec 2019 Our response has also been uploaded. The format below is not as insightful as in the document uploaded. # Comment Response Reviewer 1 1 The authors investigate the effects of President Donald Trump's Twitter use on financial returns of companies he mentions at the daily timescale using an event study methodology. The analysis is conducted appropriately, however some clarifications are needed in order for the manuscript to be publishable in my opinion. * Thank you very much for taking the time to read our manuscript and provide suggestions. 2 The authors declare they eliminate from their analysis any tweet that was online for less than 24 hours. This seems to me an unnecessary restriction, since financial markets react on a much faster time scale nowadays and tweet deletion doesn't necessarily imply that it would have no effect. I invite the authors to expand on the justification for this choice of threshold; * Thanks for raising this issue. The decision to delete tweets that were online for less than 24 hours is based upon a limitation of the data archive. Unfortunately, TrumpTwitterarchive.com/archive has a breakpoint in its data series: from 27/1/2017 onwards, the archive switched to monitoring tweets in real time. Before this date, the tweets were only collected daily (at least every 24 hours).
Therefore, not all tweets that are deleted can be collected in the data archive. With certainty, it can be said that all the tweets that have been online for over 24 hours are included in the dataset; therefore, the decision was made to eliminate all tweets that were deleted within the time span of 24 hours, to avoid a breakpoint in the dataset that is unrelated to the topic studied. {161-171} 3 The authors refer numerous times to the President's "temperamental nature". While it can be understood why they call it that, it seems inappropriate to me in a scientific journal and I would suggest they find an agreement with the editor on the choice of wording; * This point is well taken. We amended the choice of wording when we refer to the tweets of the US president. Reviewer 2 also raised this issue (see comment #19). {65-66, 196-197, 360} 4 I would suggest the authors introduce the event study methodology and statistical testing with more detail, giving particular attention to the distinction between quantities calculated in the estimation period or in the observation period. * We provide more detail about the methodology and the tests employed. We point out the importance of the estimation period for the calculations of the AARs in the event period. {143-156} 5 In the definition of the CDA test statistic, \\overline{u} and \\overline{u_it} appear in the formulas but are never defined. This makes the explanation particularly confusing. I strongly suggest that the authors revise the whole section and spend more words explaining the methods, since they cannot be easily understood from the text as they are written; * We provide more detail about this test statistic; please see the changes in relation to the previous comment (#4). {245-257} 6 Reference to the original paper by Cowan (Cowan, A. R. (1992). Nonparametric event study tests.
Review of Quantitative Finance and Accounting, 2(4), 343-358) should be included; * Thanks for suggesting this; we include it in the list of references. {261, 432-433} 7 In the definition of the non-parametric test, no clear definition is given for quantities M_i and S_it. Also, there is some confusing phrasing regarding positive abnormal returns (line 224) and negative abnormal returns (line 227), which should be made clearer. * We amended the text and provide more detail; see also the response regarding comment #5. {245-257, 259-267} 8 In Table 1, a p-value for the Jarque-Bera test would be easier to understand for readers who don't know the quantiles of the Chi Squared distribution by heart. * We include the p-value. {307; last line in the table} 9 Figure 1 panels should be made the same size and with the same x axis limits in order to be easier to read and compare. Also, axis labels instead of plot titles would make the plot easier to understand at a glance. * Thanks for suggesting this; we changed the formatting and included the axis labels. {322-326} Reviewer 2 10 The paper presents an analysis of the effects of president Trump's Twitter messages on the stock market. The paper is rigorous in the data analysis and complies with the strict requirements of PLOS ONE with regard to data availability and statistical analysis. However, I have a number of complaints about the data used in the paper, the presentation and justification of the data analysis techniques employed, and the strength and interpretation of the conclusions. I think these complaints should be acted upon to make the paper better. * We very much appreciate that you took the time to read our manuscript and to provide comments. 11 The dataset used in the analysis, in my view, is too restrictive. There is no real justification for why it should be based on just the first two years of Trump's presidency.
I can understand the reasons for the starting date, but there is no real reason to restrict it to just two years. In fact, it would have been much more useful to extend it to the present day (so one additional year) and then study if the activity of the president actually produced the observed effects on the market and perhaps showed some decay over time. The major problem is that the data set is too small, once irrelevant or questionable tweets are removed. The analysis is based on only 100 tweets! Not to mention that these 100 tweets then become only 81 once sentiment analysis is applied to them (assuming the 19 neutral tweets did not find any use)! * This comment is highly relevant in the light of conducting an event study; the issue of how many events are ‘enough’ is heavily debated. There are a large number of event studies that rely on fewer than 10 events, and event studies that have only one event. First, all tweets were checked for company names; this relates to about 7000 tweets per year. Companies are not mentioned that often and we selected all of them as described in the manuscript {158-180}. Compared to existing research, the related studies have far fewer tweets in their samples than we do, namely Born et al. – 15; Ge et al. – 48; Juma’h and Alnsour – 58. Second, to substantiate that our sample size is not overly restrictive, we refer to MacKinlay (1997; table 2, page 97) and the simulations by Brown and Warner (1985). This suggests that our sample size is not too small from an event study perspective. Last, an event study is not very suitable for investigating long-term effects of news. Most importantly, this is because of the continuous build-up of confounding events (news) after the original event. 12 The set is too small, in my view, to support the analysis that followed as other factors happening at the same time could have produced the observed result. The use of more data would certainly make this possibility smaller.
* See also our response to the previous comment. We control for confounding news regarding the companies as explained in the data section. {171-174} 13 For the same reasons I would also extend the event window from 2 to maybe 3 or 4 days. This would make it possible to account for holidays or other delaying factors on the effect the tweet could have on the market. Also, an analysis of the effects of the window size on the conclusions would be quite interesting. * Thanks for raising this important issue. We account for holidays and weekends by using trading days throughout the analysis. We realize this might not have been very clear and now explain it in the text. {207-209} Further expanding the event window conflicts with the assumptions of relying on the market and risk adjusted returns model and would reduce the power of all test statistics. The assumption is that market participants can respond quickly to news and that their combined efforts are reflected in the stock prices (returns). Changing to a long event window reduces the power of the tests and opens up the impact of more and more confounding events (news). In addition, it would rely on the assumption that investors would start to respond to a tweet only after some days. This is not in line with finance theory though. 14 The distinction between "estimation window" and "event window" is not clear. * Thank you for highlighting this; Reviewer 1 also raises the point (comment #4). We now understand the need to explain the research method and provide a more detailed explanation of the event study method and point out the differences between the two windows. {143-156} 15 Why did you not have access to more high-frequency information? One of the authors is from a school of management, so this should not have been so difficult. * Unfortunately, our university is not well endowed and is not willing to fund access to the databases that include HF information / tick data.
16 While the formulation of the hypotheses and style of the analysis is very clear and presented with rigour, the motivations for the choice of some of the techniques have not been clearly justified. Some more explanation of the reasons you chose the "market and risk adjusted returns model" would have been useful. More importantly, why did you choose the "crude dependence adjustment model"? And could you provide some reference for it? Why did you use the Jarque-Bera statistic? Why not one of the many other statistics? The choice of some of these methods should be clearly justified so as not to suggest the choice could have been biased by the results. * See also the responses to comments #12-14. We provide more detail about the event study methodology, motivate the use of the market and risk adjusted model and the CDA, and provide a reference. {143-156, 241-271, 432-433} We use the JB test to inform about the (non)normality of the distribution. We perform both parametric and non-parametric testing to rule out bias. 17 What does the index t stand for in the formula at page 8 line 208? * The ‘t’ is for the day in the event window. We now also mention this in the accompanying text. {248} 18 Why lose the strength of the sentiment provided by SentiStrength and just account for its positive and negative nature (as reported in line 244, page 10)? It would have been quite interesting to relate the strength of the opinion to its effects. It could also have provided more interesting arguments for the analysis reported at the end of page 14. * Thank you very much for suggesting this. It is very useful. We include an assessment of whether the strength of sentiment matters. We study whether (very) ‘strong’ expressions yield abnormal returns that significantly deviate from moderate/neutral ones, and whether ‘strong’ positive tweets yield a different response than ‘strong’ negative ones.
{375-384; see also Supplementary Material D and E} 19 I find it quite strange that in a scientific paper the authors could make such "personal" and "opinionated" comments as: "We assume that the temperamental nature of the president results in lack of predictability" (page 3, line 64) or again "temperamental nature" (page 7, line 163) or, finally, your mention of the possibility that someone with prior knowledge of the president's tweets may be able to profit from them (page 15, line 351). I think such comments are better left out of a scientific paper so as to avoid possible attacks that would lower the scientific value of the conclusions. * Thanks for raising this issue (Reviewer 1 also brought it up – comment #3). We realize that we should have toned down the wording, as it might distract from the argumentation. We amended the choice of words in the text fragments mentioned in this comment. {65-66, 196-197, 360} We deleted the remark about prior knowledge. {415-416} 20 page 4, line 106: "the may" -> "may" * ‘they’ {110} Submitted filename: PONE-D-19-26269 Rebuttal.docx 27 Dec 2019 PONE-D-19-26269R1 Under His Thumb The Effect of President Donald Trump’s Twitter Messages on the US Stock Market PLOS ONE Dear Mr Scholtens, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The reviewers are mostly satisfied with your first revision but raised a few more points that need to be addressed before we can consider the manuscript for publication. In addition to the points raised by the reviewers, we also ask you to better describe how the sentiment analysis (SentiStrength) works and how one could use it to reproduce the results.
We also ask you to clearly acknowledge the limitations of observational studies and causality analysis in the abstract and in the main text. A data availability statement also needs to be added in the manuscript (see https://journals.plos.org/plosone/s/data-availability). We would appreciate receiving your revised manuscript by Feb 10 2020 11:59PM. When you are ready to submit your revision, please include a rebuttal letter labeled 'Response to Reviewers', a marked-up copy of your manuscript labeled 'Revised Manuscript with Track Changes', and an unmarked version labeled 'Manuscript'. We look forward to receiving your revised manuscript. Kind regards, Alexandre Bovet, Ph.D.
Academic Editor PLOS ONE Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: (No Response) Reviewer #2: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 5.
Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: I appreciate the replies and modifications the authors provided in response to the comments both by me and by the other reviewer, which I believe have made the manuscript more readable and easier to understand. I still have two comments which I think need to be addressed before the paper is sound for publication. 1) While I agree with the authors on the choice of only two days in their event window to avoid confounding events, and I think 100 events are sufficient to draw some conclusions, I also think that showing some robustness to sample selection would reinforce the results, given that the significance of the tests is not particularly strong. I think the authors didn't respond appropriately to the main issue presented by reviewer #2 in their comment #11 (whom I thank for raising the issue, which I didn't notice at first), where a reason was requested for the choice of limiting their sample to the first two years of the presidency instead of considering tweets up to the present day. Unless the authors have a sound justification for this choice, I would suggest they perform their analysis on the updated dataset.
2) At line 250 (page 10) in the revised manuscript I believe w should be the number of stocks with positive CAR, not negative, in order to be consistent with the statements above. Reviewer #2: This is the second version of the paper and I see that all my comments have been considered and acted upon, as far as was possible given the available data. I also very much appreciate the extension of the work taking into consideration the actual numerical strength of the sentiments provided by SentiStrength. I think this adds value to the paper. I still have a few minor comments that I would like you to address before the paper can be published: 1) You introduce the use of the S&P 500 at page 4. I believe most readers will know what it is. Yet, given its importance in your analysis, a few lines of introduction could be useful to those who have poor knowledge of this index. 2) Please re-write your first hypothesis reported at the end of page 5. It is not clear. 3) I really appreciate your clear explanation of what an "event study" is, at page 6. It is really clear now and also justifies the small window of time you considered. 4) You should motivate more clearly some of the paper's decisions, like for example the one related to removing companies involved in mergers and acquisitions (page 7). Why? The president could have commented on them too! Also, at page 11, you (still) did not justify the use of the Jarque-Bera statistic for a normal distribution. Why this specific statistic and not, for example, the Kolmogorov–Smirnov or the Lilliefors tests, which are better known? 5) At page 11, Mike Thelwall's surname is misspelled at line 276. In general, please double check the text as you might have added some weird English with your editing (e.g. page 18, lines 397-398) 6) Page 13, you added a line to Table 1 with the p-value of the Jarque-Bera statistic. Is 0 correct? Explain it.
7) On page 19, you removed a sentence on future research on the effects over time of the president's tweets. I thought that was very interesting. Why remove it? Also, the very last sentence of the paper reads oddly. Written that way, in my view, it seems to say that you could not establish any lasting effects, while what you wanted to say is that you could not find any empirical support for lasting effects, although there could have been some. The two things are very different, and you might want to weigh your words carefully when you discuss such "political" topics.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous, but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.
4 Feb 2020

PONE-D-19-26269R1

Dear Editor,

We very much appreciate the opportunity to revise and resubmit our manuscript to PLOS ONE. We want to thank you and the reviewers for taking the time to review the manuscript and to provide thoughtful and constructive comments. We address each of them and provide detail below. The line numbers in the responses to the comments refer to the document with track changes.

Thank you very much,
Sincerely,

Editor

1. Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
* Thank you very much. We address the points below and hope you feel this results in a manuscript that complies with the publication criteria.

2. In addition to the points raised by the reviewers, we also ask you to better describe how the sentiment analysis (SentiStrength) works and how one could use it to reproduce the results.
* Thanks for raising this; we provide more detail about the sentiment analysis in the main text (lines 264-273) and in the supplementary material (S.A). We did not include all of it in the main text, though, as we felt it would break the flow of the analysis.

3. We also ask you to clearly acknowledge the limitations of observational studies and causality analysis in the abstract and in the main text.
* We discuss the limitations of the study in the abstract (lines 23-25) and in the main text (lines 148-150).

4. A data availability statement also needs to be added to the manuscript (see https://journals.plos.org/plosone/s/data-availability).
* We include a data availability statement (lines 454-457) and include an appendix with essential information about the events (Supplementary Material F - Event list).
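For readers unfamiliar with the SentiStrength-style scoring discussed in comment 2: it assigns each text two separate scores, a positive strength (1 to 5) and a negative strength (-1 to -5), rather than a single polarity. The sketch below is a toy illustration of that dual-scale idea only; the word lists and weights are invented for demonstration and are not the SentiStrength lexicon or algorithm.

```python
# Toy illustration of dual-scale sentiment scoring in the spirit of
# SentiStrength: each text receives both a positive strength (1..5)
# and a negative strength (-1..-5). The mini-lexicons are invented
# for demonstration; they are NOT SentiStrength's actual word lists.
import re

POS_LEXICON = {"great": 3, "right": 2, "good": 2}       # hypothetical weights
NEG_LEXICON = {"terrible": 4, "unfairly": 3, "bad": 2}  # hypothetical weights

def sentiment_strength(text):
    """Return (positive, negative) strength scores for a text."""
    words = re.findall(r"[a-z']+", text.lower())
    # Strength is the strongest matching word on each scale; 1 is the floor.
    pos = max([1] + [POS_LEXICON.get(w, 0) for w in words])
    neg = max([1] + [NEG_LEXICON.get(w, 0) for w in words])
    return pos, -neg  # negative scale reported as -1..-5

tweet = ("My daughter Ivanka has been treated so unfairly by @Nordstrom. "
         "She is a great person, always pushing me to do the right thing! Terrible!")
print(sentiment_strength(tweet))  # -> (3, -4): a strongly mixed tweet
```

A tweet can thus register as both clearly positive and clearly negative at once, which is why the paper's event classification depends on which scale dominates.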
Reviewer #1

5. I appreciate the replies and modifications the authors provided in response to the comments by me and by the other reviewer, which I believe have made the manuscript more readable and easier to understand. I still have two comments which I think need to be addressed before the paper is sound for publication.
* Thank you very much. We feel your comments have been very helpful.

6. 1) While I agree with the authors on the choice of only two days in their event window to avoid confounding events, and I think 100 events are sufficient to draw some conclusions, I also think that showing some robustness to sample selection would reinforce the results, given that the significance of the tests is not particularly strong. I think the authors did not respond appropriately to the main issue presented by reviewer #2 in their comment #11 (whom I thank for raising the issue, which I did not notice at first), where they asked for a reason for the choice of limiting the sample to the first two years of the presidency instead of considering tweets up to the present day. Unless the authors have a sound justification for this choice, I would suggest they perform their analysis on the updated dataset.
* Thanks for raising this issue. Reviewer #2 had the same question, and (s)he was OK with our response (we assume that our responses to the reviewers' comments are available to both reviewers). An important reason we cannot expand the sample right now is the lack of funding: we have no research budget to take on the additional, highly demanding investigation of the tweets. We want to pursue research on this topic and, at a later stage, compare the impact during several relevant subperiods of the presidency (e.g., first versus second half; first term versus second term; in relation to the business cycle). But such a comparison requires more observations, which are not available yet.
We hope that this paper also helps us position well in the competition for the research funding we require to carry out this additional, very time-intensive research.

7. 2) At line 250 (page 10) in the revised manuscript, I believe w should be the number of stocks with positive CAR, not negative, in order to be consistent with the statements above.
* Thanks for pointing this out; you are correct (see also Campbell et al., 2010, page 3081). We amended the text (lines 257-258).

Reviewer #2

8. This is the second version of the paper, and I see that all my comments have been considered and acted upon, as far as was possible given the available data. I also very much appreciate the extension of the work to take into consideration the actual numerical strength of the sentiments provided by SentiStrength. I think this adds value to the paper. I still have a few minor comments that I would like you to address before the paper can be published:
* Thank you very much. We feel your comments have been very helpful.

9. 1) You introduce the use of the S&P 500 on page 4. I believe most readers will know what it is. Yet, given its importance in your analysis, a few lines of introduction could be useful to those who have little knowledge of this index.
* Thanks for highlighting this. In order to avoid confusion, we now refer to the stock market index in general on page 4 (lines 93-94). We provide more detail about the S&P 500 later on (lines 193-194).

10. 2) Please rewrite your first hypothesis, reported at the end of page 5. It is not clear.
* We rewrote the first hypothesis (lines 125-126).

11. 3) I really appreciate your clear explanation of what an "event study" is, on page 6. It is really clear now and also justifies the small window of time you considered.
* Thank you very much. We are pleased that you feel the event study is clearly explained and motivated.
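The sign test discussed in comment 7 checks whether the number w of stocks with positive CAR (cumulative abnormal return) deviates from what chance would produce under the null of no event effect. A minimal sketch using simulated CARs and a naive null proportion of 0.5 (the generalized version in the event-study literature estimates this proportion from the estimation window instead):

```python
# Minimal sketch of a sign test on event-study CARs: under the null of
# no effect, roughly half of the CARs should be positive, so w (the
# count of positive CARs) can be tested against a Binomial(n, 0.5).
# The CARs below are simulated for illustration only.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
cars = rng.normal(loc=-0.01, scale=0.03, size=100)  # 100 simulated event CARs

w = int(np.sum(cars > 0))  # number of stocks with positive CAR
n = len(cars)
result = binomtest(w, n, p=0.5, alternative="two-sided")
print(f"w = {w} of {n} positive CARs, p-value = {result.pvalue:.3f}")
```

The appeal of this nonparametric check alongside the parametric t-tests is that it does not assume normally distributed abnormal returns, which matters given the skewness and kurtosis the paper reports.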
12. 4) You should motivate more clearly some of the paper's decisions, for example the one related to removing companies involved in mergers and acquisitions (page 7). Why? The president could have commented on them too! Also, on page 11, you (still) did not justify the use of the Jarque-Bera statistic for testing a normal distribution. Why this specific statistic and not, for example, the Kolmogorov–Smirnov or the Lilliefors tests, which are better known?
* Thanks for raising this issue. We provide an explanation of why we control for specific confounding news (lines 168-171). Further, we admit that we do not have a strong justification for using a specific test to judge the normality of the distribution. Therefore, we decided to leave out the JB statistic and its probability value (see also comment #14 below) and instead refer directly to the skewness and kurtosis (lines 244-246; 294).

13. 5) On page 11, Mike Thelwall's surname is misspelled at line 276. In general, please double-check the text, as you might have added some odd English with your editing (e.g., page 18, lines 397-398).
* We looked into this and amended the text accordingly (e.g., line 264). We are not sure what you mean by odd English on page 18 (lines 397-398), as this is a reference to an article. We did not change the titles of the articles in the reference list.

14. 6) On page 13, you added a line to Table 1 with the p-value of the Jarque-Bera statistic. Is 0 correct? Explain it.
* Based on your remark regarding the use of tests for normality, we decided to leave out the JB statistic and its probability value (see remark #12 above).

15. 7) On page 19, you removed a sentence on future research on the effects over time of the president's tweets. I thought that was very interesting. Why remove it? Also, the very last sentence of the paper reads oddly.
Written that way, in my view, it seems to say that you could not establish any lasting effects, while what you wanted to say is that you could not find any empirical support for lasting effects, although there could have been some. The two things are very different, and you might want to weigh your words carefully when you discuss such "political" topics.
* We include the future research on the effects over time (lines 392-394). We left out the last sentence (lines 399-400), as we do not want to discuss politics.

Submitted filename: PONE-D-19-26269R1 Rebuttal.docx

19 Feb 2020

Under His Thumb: The Effect of President Donald Trump's Twitter Messages on the US Stock Market
PONE-D-19-26269R2

Dear Dr. Scholtens,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements. Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter, and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact.
If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,
Alexandre Bovet, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional): Please make sure that the additional data (at https://hdl.handle.net/10411/VIPJIN) is openly accessible to anyone reading the article.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exceptions (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians, and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I believe the authors properly addressed all the comments raised by both reviewers, so I recommend accepting the manuscript for publication.

Reviewer #2: The paper has been much improved compared to the first version I received. I think the comments of the reviewers were instrumental in that. The authors have addressed all my comments, and I believe the paper is now ready to be accepted and published.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.
If you choose "no", your identity will remain anonymous, but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

24 Feb 2020

PONE-D-19-26269R2
Under His Thumb: The Effect of President Donald Trump's Twitter Messages on the US Stock Market

Dear Dr. Scholtens:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please notify them about your upcoming paper at this point to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Alexandre Bovet
Academic Editor
PLOS ONE
  2 in total

1.  The Effects of Twitter Sentiment on Stock Price Returns.

Authors:  Gabriele Ranco; Darko Aleksovski; Guido Caldarelli; Miha Grčar; Igor Mozetič
Journal:  PLoS One       Date:  2015-09-21       Impact factor: 3.240

2.  When can social media lead financial markets?

Authors:  Ilya Zheludev; Robert Smith; Tomaso Aste
Journal:  Sci Rep       Date:  2014-02-27       Impact factor: 4.379

  2 in total

1.  Comparing traditional news and social media with stock price movements; which comes first, the news or the price change?

Authors:  Stephen Smith; Anthony O'Hare
Journal:  J Big Data       Date:  2022-04-28

2.  An empirical approach to the "Trump Effect" on US financial markets with causal-impact Bayesian analysis.

Authors:  Pedro Antonio Martín Cervantes; Salvador Cruz Rambaud
Journal:  Heliyon       Date:  2020-08-26
