Literature DB >> 33267278

Seasonal Entropy, Diversity and Inequality Measures of Submitted and Accepted Papers Distributions in Peer-Reviewed Journals.

Marcel Ausloos1,2,3, Olgica Nedic4, Aleksandar Dekanski5.   

Abstract

This paper presents a novel method for finding features in the analysis of variable distributions stemming from time series. We apply the methodology to the case of submitted and accepted papers in peer-reviewed journals. We provide a comparative study of editorial decisions for papers submitted to two peer-reviewed journals: the Journal of the Serbian Chemical Society (JSCS) and this MDPI Entropy journal. We cover three recent years for which the fate of submitted papers (about 600 papers to JSCS and 2500 to Entropy) is completely determined. Instead of comparing the number distributions of these papers as a function of time with respect to a uniform distribution, we analyze the relevant probabilities, from which we derive the information entropy. It is argued that such probabilities are indeed more relevant for authors than the actual number of submissions. We tie this entropy analysis to the so-called diversity of the variable distributions. Furthermore, we emphasize the correspondence of the entropy and the diversity with inequality measures, like the Herfindahl-Hirschman index and the Theil index, the latter itself being in the class of entropy measures; the Gini coefficient, which also measures diversity in ranking, is calculated for further discussion. In this sample, the seasonal aspects of the peer review process are outlined. It is found that the use of such indices, non-linear transformations of the data distributions, allows us to distinguish features and evolutions of the peer review process as a function of time, as well as to compare the non-uniformity of distributions. Furthermore, t- and z-statistical tests are applied in order to measure the significance (p-level) of the findings, that is, whether papers are more likely to be accepted if they are submitted during a few specific months or during a particular "season"; the predictability strength depends on the journal.


Keywords:  Gini coefficient; Herfindahl-Hirschman index; Theil index; diversity index; peer review; seasons

Year:  2019        PMID: 33267278      PMCID: PMC7515052          DOI: 10.3390/e21060564

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


1. Introduction

Authors who submit (by their own assumption) high-quality papers to scholarly journals are interested in knowing whether there are factors which may increase the probability that their papers be accepted. One such factor may be related to the month or day of submission, as recently discussed [1]. Indeed, authors might wonder about editors' and reviewers' overload at some times of the year. Moreover, the number of submitted papers is relevant for editors and for publishers' manuscript-handling systems, to the point that artificial intelligence can be useful for helping journal editors [2,3]. More generally, informetrics and bibliometrics are also interested in manuscript submission timing, especially in light of the enormous increase in the number of electronic journals. From the author's point of view, rejection is often frustrating, be it due to an "editor desk rejection" or following a review process. A high editor desk rejection rate has sometimes been explained as an entrance-barrier editor load effect [4]. Thus, it is of interest to observe whether there is a high probability of submission during specific months or seasons. In fact, non-uniform submission has already been studied. However, the acceptance distribution during a year, that is, a "monthly bias", is rarely studied, because of publisher secrecy. Search engines do not provide any information at all on the timing of rejected papers. Interestingly, Boja et al. [1] recently examined a large database of journals with high impact factors and reported that a day-of-the-week correlation effect occurs between "when a paper is submitted to a peer-reviewed journal (and) whether that paper is accepted". However, there was no study of rejected papers because of a lack of data; therefore, one may wonder whether, besides a "day of the week" effect, there is some "seasonal" effect.
One may indeed imagine that researchers in academic surroundings do not have a constant occupation rate, due to teaching classes, holidays, congresses, and even budgetary conditions. Researchers have only specific times during the academic year for producing research papers. From the "seasonal effect" point of view, Shalvi et al. [5] found a discrepancy in the pattern of "submission-per-month" and "acceptance-per-month" for Psychological Science, but not for Personality and Social Psychology Bulletin. Summer months inspired authors to submit more papers to the former journal, but the subsequent acceptance was not related to the effect of seasonal bias (based on a test for percentages). On the other hand, a very low rate of acceptance was recorded for manuscripts sent in November or December. The number of submissions to the latter journal, on the contrary, was the greatest during winter months, followed by a reduced "production" in April; however, the rate of acceptance was the highest for papers submitted in the period from August to October. Moreover, a significant "acceptance success dip" was noted for submissions made in winter months. One of the main reasons for such differences between journals was conjectured to lie in different rejection policies; some journals employ desk rejection, whereas others do not. Schreiber [4] analysed the acceptance rate of a journal, Europhysics Letters, over a period of 12 years and found that the rate of manuscript submission exceeded the rate of acceptance. The data revealed (Table 2 in [4]) that there is a maximum number of submissions in July, defined as a 10% increase compared to the annual mean, together with a minimum in February, even taking into account the "shorter length" of this month. He concluded that significant fluctuations exist between months. The acceptance rate ranged from 45% to 55%; the highest acceptance rate was seen in July and the lowest in January, in the most recent years. Recently, Ausloos et al. 
[6] studied submission and subsequent acceptance data for two journals, a specialized (chemistry) scientific journal and a multidisciplinary journal, respectively, i.e., the Journal of the Serbian Chemical Society (JSCS) (http://shd.org.rs/JSCS/) and Entropy (http://www.mdpi.com/journal/entropy), each over a 3-year time interval. The authors found that fluctuations, expectedly, occur: the number of submissions to JSCS is the greatest in July and September and the smallest in May and December. The highest rate of paper submission to Entropy was noted in October and December and the lowest in August. Concerning acceptance for JSCS, the proportion of accepted/submitted manuscripts is the greatest in January and October. Concerning acceptance for Entropy, the number of papers steadily increases from January to a peak in May, followed by a marked dip during summer time, before reaching a peak in October of the order of the May peak. Concerning the number of submitted manuscripts, it was observed that the acceptance rate in JSCS was the highest if papers were submitted in January and February; it was significantly lower if the submission occurred in December. In the case of Entropy, the lowest acceptance rate was for manuscripts submitted in June or December, the highest rate being for those sent in spring months, February to May. One recognizes a journal-dependent seasonal shift of the features. Notice that we adopt the word "seasonal"; even though changes in seasons occur on the 21st of various months, we approximate the season transition as occurring on the 1st day of the following month. Here, we propose another line of approach in order to study the submission, acceptance, and rejection (number and rate) diversity based on probabilities, with emphasis on the conditional probabilities, thereafter measuring the entropy and other characteristics of the distributions. 
Indeed, the entropy is a measure of disorder, and one of several ways to measure diversity. Researchers have their own preferences [7,8] in measuring diversity. Here below, we practically adapt the classical measure of diversity, as used in ecology, but other cases of interest pertaining to information science [9,10] can be mentioned. Let us recall that the general equation of diversity is often written in the form [11,12] qD = (Σ_i p_i^q)^(1/(1−q)), in which p_i = x_i/Σ_j x_j, and x_i is the measured variable. For q → 1, qD reduces to the exponential of the Shannon entropy S = −Σ_i p_i ln(p_i) [13,14], to which we will stick here. Several inequality measures are commonly used in the literature: in the class of entropy-related measures, one finds the exponential entropy [15], which measures the extent of a distribution, and the Theil index [16], which emerges as the most popular one [17,18], besides the Herfindahl–Hirschman index [19], measuring "concentrations". Finally, upon ranking the measured variable according to its size, the Gini coefficient [20] is a classical indicator of non-uniform distributions. The Theil index [16] is defined by Th = Σ_i p_i ln(N p_i). It seems obvious that the Theil index can thus be expressed in terms of the negative entropy, indicating the deviation from the maximum disorder entropy S_max = ln(N): Th = ln(N) − S. The exponential entropy [15] is e^(−S), as reported in Table 5. The Herfindahl–Hirschman index (HHI) [19] is an indicator of the "concentration" of variables, here of the "amount of competition" between the months. The higher the value of HHI, the smaller the number of months with a large value of (submitted, or accepted, or accepted if submitted) papers in a given month. Formally, adapting the HHI notion to the present case, HHI = Σ_m p_m^2, the sum running over the 12 months; notice that 1/12 ≤ HHI ≤ 1. The Gini coefficient [20] has been widely used as a measure of income [21] or wealth inequality [22,23]; nowadays, it is widely used in many other fields. 
In brief, one first defines the Lorenz curve L(r) as the percentage contributed by the bottom r of the variable population to the total value of the measured (and now ranked) variable, i.e., L(r) = Σ_{i=1..r} x_(i) / Σ_{i=1..N} x_(i); one then obtains the Gini coefficient Gi as twice the area between this Lorenz curve and the diagonal line in the (r/N, L(r)) plane; such a diagonal represents perfect equality, whence Gi = 0 corresponds to perfect equality of the variables. Having set up the framework and presented the definitions of the indices to be calculated, we indicate the quantities of interest and turn to the data and data analysis in Section 2 and Section 3, respectively. Their discussion and comments on the present study, together with a remark on its limitations, are found in the concluding Section 4.
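The indices defined above can be computed in a few lines from a vector of monthly counts. A minimal Python sketch (using an illustrative, perfectly uniform year rather than the journals' data) recovers the extremal values quoted throughout the text: S = ln 12 ≃ 2.4849, 1D = 12, Th = 0, HHI = 1/12 ≃ 0.08333, and Gi = 0.

```python
import math

def indices(x):
    """Diversity and inequality indices for a list of (monthly) counts x."""
    N = len(x)
    total = sum(x)
    p = [xi / total for xi in x]                        # shares p_i
    S = -sum(pi * math.log(pi) for pi in p if pi > 0)   # Shannon entropy
    D1 = math.exp(S)                                    # diversity of order q = 1
    Th = math.log(N) - S                                # Theil index = ln N - S
    HHI = sum(pi ** 2 for pi in p)                      # Herfindahl-Hirschman index
    # Gini coefficient: twice the area between the Lorenz curve and the diagonal
    xs = sorted(x)
    cum, lorenz_area = 0.0, 0.0
    for xi in xs:
        lorenz_area += (cum + xi / 2) / total           # trapezoid slice of the Lorenz curve
        cum += xi
    Gi = 1 - 2 * lorenz_area / N
    return S, D1, Th, HHI, Gi

# A perfectly uniform year gives the extremal values of all five indices
S, D1, Th, HHI, Gi = indices([10] * 12)
```

Any departure from uniformity raises Th, HHI, and Gi above these floors and pushes S and 1D below their maxima.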

2. Definitions

In order to develop the method measuring the disorder of the time series, let us recall the necessary data. The raw data can be found in Reference [6]. For completeness, the time series of papers submitted and of papers accepted if submitted during a given month to JSCS and to Entropy are recalled in Figure A1, for the years in which the full data is available, that is, for which the final decisions have been made on the submitted papers.
Figure A1

Number of papers submitted and number of papers accepted if submitted, during a given month, to JSCS and to Entropy, in the examined 36 months of the 3-year time intervals, [2012–2014] and [2014–2016], respectively.

Let us introduce notations: the number of monthly submissions in a given month (m) of a year (y) is called n_s(m,y); the percentage of this set, q_s(m,y) = n_s(m,y)/N_s(y), with N_s(y) = Σ_m n_s(m,y), is the probability of submission in a given month for a specific year. Similarly, one can define n_a(m,y) as the number of accepted papers when submitted in year (y) in a specific month (m); for the related percentage, one has q_a(m,y) = n_a(m,y)/N_a(y). More importantly for authors, the (conditional) probability of a paper's acceptance when submitted in a given month, p(a|s)(m,y) = n_a(m,y)/n_s(m,y), may be considered and estimated before submission. Thereafter, one can deduce the relevant "monthly information entropies", S(m,y) = −q(m,y) ln q(m,y), and the overall information entropy, S(y) = −Σ_m q(m,y) ln q(m,y), in order to pinpoint whether the yearly distributions are disordered. Moreover, we can discuss the data by not only comparing different years, but also the cumulated data per month in the examined time interval, as if all years were "equivalent": n_s(m) = Σ_y n_s(m,y), from which one deduces q_s(m) = n_s(m)/Σ_m n_s(m), and similarly for the accepted papers, n_a(m) and q_a(m), leading to the ratio between cumulated monthly data, q(a|s)(m) = n_a(m)/n_s(m), to the corresponding "monthly cumulated entropy", S(a|s)(m) = −q(a|s)(m) ln q(a|s)(m), and finally to their sum over the months, which will be called the "conditional entropy". Relevant values are given in Table 1, Table 2, Table 3 and Table 4, both for JSCS and for Entropy. The diversity and the inequality index values are given in Table 5. Most of the results stem from the use of a free online software [24].
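With these notations, the probabilities and entropies of this section follow directly from the monthly counts. A minimal Python sketch (the uniform counts below are hypothetical, for illustration only):

```python
import math

def monthly_stats(n_s, n_a):
    """Monthly probabilities and entropies from submitted/accepted counts."""
    Ns, Na = sum(n_s), sum(n_a)
    q_s = [n / Ns for n in n_s]                 # q_s(m): submission probability
    q_a = [n / Na for n in n_a]                 # q_a(m): acceptance probability
    p_as = [a / s for a, s in zip(n_a, n_s)]    # p(a|s)(m): acceptance if submitted in m
    S_s = -sum(q * math.log(q) for q in q_s if q > 0)      # overall submission entropy
    S_a = -sum(q * math.log(q) for q in q_a if q > 0)      # overall acceptance entropy
    S_cond = -sum(p * math.log(p) for p in p_as if p > 0)  # "conditional entropy"
    return q_s, q_a, p_as, S_s, S_a, S_cond

# Hypothetical uniform year: 20 submissions and 10 acceptances every month,
# so S_s = ln 12, every p(a|s)(m) = 0.5, and S_cond = 12 * (0.5 ln 2) = 6 ln 2
q_s, q_a, p_as, S_s, S_a, S_cond = monthly_stats([20] * 12, [10] * 12)
```

Cumulating over years, as in the q(m) columns of the tables, amounts to summing n_s(m,y) and n_a(m,y) over y before calling the function.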
Table 1

Number of papers and monthly percentage of papers submitted in a given year (y) and month (m), respectively, to JSCS in 2012, 2013, and 2014, and to Entropy in 2014, 2015, and 2016; q_s(m) is obtained after summing the events of each year for a given month, i.e., from n_s(m) = Σ_y n_s(m,y); last lines: χ2 and entropy, mean, standard deviation, confidence interval, and t-test with significance level; recall that ln 12 ≃ 2.4849 and 1/12 ≃ 0.08333.

              JSCS                                            Entropy
Ns(y)         317       322       274       913             604       961       1008      2573
              qs(m,y)   qs(m,y)   qs(m,y)   qs(m)           qs(m,y)   qs(m,y)   qs(m,y)   qs(m)
y =           2012      2013      2014      [2012–2014]     2014      2015      2016      [2014–2016]
January       0.08202   0.10870   0.08029   0.09091         0.09106   0.07596   0.08532   0.08317
February      0.04732   0.05280   0.09489   0.06353         0.07285   0.07492   0.07639   0.07501
March         0.05994   0.09317   0.10219   0.08434         0.07119   0.09157   0.07937   0.08201
April         0.09779   0.08385   0.10584   0.09529         0.08775   0.08325   0.08730   0.08589
May           0.08202   0.05590   0.05839   0.06572         0.07616   0.09990   0.08333   0.08784
June          0.06940   0.07453   0.06934   0.07119         0.06954   0.07700   0.09325   0.08162
July          0.09779   0.09627   0.09854   0.09748         0.07947   0.09261   0.07937   0.08434
August        0.06940   0.09317   0.06569   0.07667         0.05960   0.07596   0.06349   0.06724
September     0.06625   0.09938   0.09854   0.08762         0.07450   0.07700   0.08036   0.07773
October       0.11987   0.09938   0.05474   0.09310         0.11258   0.07492   0.09325   0.09094
November      0.08202   0.07764   0.10949   0.08872         0.07781   0.08949   0.09028   0.08706
December      0.12618   0.06522   0.06204   0.08543         0.12748   0.08741   0.08829   0.09716
χ2            23.278    14.075    14.964    15.811          29.497    9.377     9.333     20.236
entropy       2.4487    2.4620    2.4569    2.4760          2.4621    2.4801    2.4801    2.4809
Mean          0.08333   0.08333   0.08333   0.08333         0.08333   0.08333   0.08333   0.08333
Std Dev       0.02359   0.01820   0.02034   0.01145         0.01923   0.00860   0.00837   0.00772
μ−2σ          0.03616   0.04694   0.04265   0.06043         0.04486   0.06614   0.06658   0.06790
μ+2σ          0.13051   0.11973   0.12401   0.10624         0.12180   0.10053   0.10008   0.09877
t-stat        654.12    854.49    705.30    2287.08         1107.62   3124.03   3287.43   5694.50
signif. (p<)  0.0001    0.0001    0.0001    0.0001          0.0001    0.0001    0.0001    0.0001
Table 2

Number of papers and monthly percentage of papers accepted when submitted in a given year (y) and month (m), respectively, to JSCS in 2012, 2013, and 2014, and to Entropy in 2014, 2015, and 2016; q_a(m) is obtained after summing the events of each year for a given month, i.e., from n_a(m) = Σ_y n_a(m,y); last lines: χ2 and entropy, mean, standard deviation, confidence interval, and t-test with significance level; recall that ln 12 ≃ 2.4849 and 1/12 ≃ 0.08333.

              JSCS                                            Entropy
Na(y)         160       146       116       422             336       467       447       1250
              qa(m,y)   qa(m,y)   qa(m,y)   qa(m)           qa(m,y)   qa(m,y)   qa(m,y)   qa(m)
y =           2012      2013      2014      [2012–2014]     2014      2015      2016      [2014–2016]
January       0.11250   0.12329   0.12069   0.11848         0.09524   0.08565   0.06935   0.08240
February      0.05625   0.06849   0.10345   0.07346         0.07143   0.08994   0.07830   0.08080
March         0.05625   0.05479   0.09483   0.06635         0.08929   0.09850   0.08054   0.08960
April         0.06875   0.05479   0.14655   0.08531         0.09226   0.08565   0.09843   0.09200
May           0.07500   0.06164   0.05172   0.06398         0.09226   0.11991   0.08054   0.09840
June          0.05625   0.06849   0.07759   0.06635         0.04762   0.07281   0.09396   0.07360
July          0.09375   0.07534   0.11207   0.09242         0.09226   0.07923   0.07159   0.08000
August        0.05000   0.07534   0.06897   0.06398         0.05952   0.05782   0.06711   0.06160
September     0.08125   0.11644   0.09483   0.09716         0.05357   0.08565   0.06935   0.07120
October       0.14375   0.13699   0.05172   0.11611         0.13095   0.07709   0.09172   0.09680
November      0.08750   0.10959   0.04310   0.08294         0.07738   0.07709   0.11409   0.09040
December      0.11875   0.05479   0.03448   0.07346         0.09821   0.07066   0.08501   0.08320
χ2            18.200    17.068    18.276    20.806          23.4286   14.8243   11.7651   19.5802
entropy       2.4305    2.4291    2.4042    2.4612          2.4496    2.4695    2.4722    2.4769
Mean          0.08333   0.08333   0.08333   0.08333         0.08333   0.08333   0.08333   0.08333
Std Dev       0.02935   0.02976   0.03455   0.01933         0.02298   0.01551   0.01412   0.01089
μ−2σ          0.02462   0.02381   0.01424   0.04468         0.03737   0.05232   0.05509   0.06155
μ+2σ          0.14204   0.14285   0.15243   0.12199         0.12930   0.11435   0.11157   0.10512
t-stat        373.51    351.88    270.17    921.04          691.19    1207.53   1297.69   2813.71
signif. (p<)  0.0001    0.0001    0.0001    0.0001          0.0001    0.0001    0.0001    0.0001
Table 3

Conditional probability p(a|s)(m,y) of having a paper accepted if submitted in a given month (m) to JSCS or to Entropy in a given year (y), and the corresponding cumulated conditional probability q(a|s)(m); the sum of such probabilities is given; we also report the here so-called "conditional entropy" (c.entr.), for either a given year or the cumulated data. The distribution total (sum), mean, standard deviation, confidence interval, and t- and z-test with p-significance level are also reported.

Month         JSCS                                            Entropy
              p(a|s)(m,y)                     q(a|s)(m)       p(a|s)(m,y)                     q(a|s)(m)
              2012      2013      2014      [2012–2014]     2014      2015      2016      [2014–2016]
January       0.6923    0.5143    0.6364    0.6024          0.5818    0.5479    0.3605    0.4813
February      0.6000    0.5882    0.4615    0.5345          0.5455    0.5833    0.4545    0.5233
March         0.4737    0.2667    0.3929    0.3636          0.6977    0.5227    0.4500    0.5308
April         0.3548    0.2963    0.5862    0.4138          0.5849    0.5000    0.5000    0.5204
May           0.4615    0.5000    0.3750    0.4500          0.6739    0.5833    0.4286    0.5442
June          0.4091    0.4167    0.4737    0.4308          0.3810    0.4595    0.4468    0.4381
July          0.4839    0.3548    0.4815    0.4382          0.6458    0.4157    0.4000    0.4608
August        0.3636    0.3667    0.4444    0.3857          0.5556    0.3699    0.4687    0.4451
September     0.6190    0.5312    0.4074    0.5125          0.4000    0.5405    0.3827    0.4450
October       0.6053    0.6250    0.4000    0.5765          0.6471    0.5000    0.4362    0.5171
November      0.5385    0.6400    0.1667    0.4321          0.5532    0.4186    0.5604    0.5045
December      0.4750    0.3810    0.2353    0.3974          0.4286    0.3929    0.4270    0.4160
c.entr.       4.0120    4.0970    4.1301    4.2136          3.7919    4.1450    4.2943    4.1883
sum           6.0767    5.4809    5.0610    5.5375          6.6951    5.8343    5.3154    5.8266
Mean (μ)      0.5064    0.4567    0.4217    0.4615          0.5579    0.4862    0.4429    0.4856
Std Dev       0.1063    0.1271    0.1297    0.0770          0.1058    0.0737    0.0528    0.0432
μ−2σ          0.2939    0.2026    0.1624    0.3075          0.3463    0.3387    0.3373    0.3992
μ+2σ          0.7189    0.7109    0.6811    0.6154          0.7695    0.6337    0.5486    0.5719
t-test        52.786    46.897    43.870    130.33          67.933    135.995   203.05    380.07
z-test        0.803     40.758    0.673     1.268           1.198     1.347     1.291     2.190
p-level       0.4221    0.4484    0.5012    0.2047          0.2309    0.1780    0.1968    0.0285
Table 4

Monthly information entropy S(a|s)(m,y) and cumulated monthly entropy S(a|s)(m) for specific years and for the cumulated data over the relevant time interval, for either journal so investigated; on the last lines, one gives the so-called "conditional entropy" (c.entr.), either per year or for the cumulated data, together with each distribution's mean, standard deviation, confidence interval, and t-test with significance level.

Month         JSCS                                            Entropy
              S(a|s)(m,y)                     S(a|s)(m)       S(a|s)(m,y)                     S(a|s)(m)
              2012      2013      2014      [2012–2014]     2014      2015      2016      [2014–2016]
January       0.25458   0.34199   0.28763   0.30531         0.31511   0.32963   0.36780   0.35196
February      0.30650   0.31213   0.35686   0.33483         0.33062   0.31441   0.35839   0.33888
March         0.35394   0.35247   0.36705   0.36785         0.25116   0.33909   0.35933   0.33619
April         0.36765   0.36041   0.31308   0.36513         0.31369   0.34657   0.34657   0.33992
May           0.35686   0.34657   0.36781   0.35933         0.26596   0.31441   0.36313   0.33109
June          0.36565   0.36478   0.35394   0.36279         0.36765   0.35732   0.35996   0.36157
July          0.35126   0.36765   0.35191   0.36155         0.28237   0.36489   0.36652   0.35702
August        0.36785   0.36788   0.36041   0.36745         0.32655   0.36787   0.35517   0.36029
September     0.29688   0.33603   0.36583   0.34258         0.36652   0.33253   0.36758   0.36031
October       0.30390   0.29375   0.36652   0.31754         0.28168   0.34657   0.36190   0.34104
November      0.33333   0.28562   0.29863   0.36257         0.32752   0.36453   0.32451   0.34518
December      0.35361   0.36765   0.34045   0.36672         0.36313   0.36705   0.36337   0.36486
c.entr.       4.0120    4.0969    4.1301    4.2137          3.7919    4.1449    4.2942    4.1883
Mean          0.33433   0.34141   0.34418   0.35114         0.31600   0.34541   0.35785   0.34903
Std Dev       0.03597   0.02924   0.02842   0.02131         0.03922   0.01963   0.01205   0.01162
μ−2σ          0.26240   0.28294   0.28734   0.30852         0.23755   0.30615   0.33376   0.32578
μ+2σ          0.40627   0.39989   0.40101   0.39375         0.39444   0.38467   0.38195   0.37227
t-stat        216.505   251.492   229.588   577.295         295.560   665.578   1060.13   1828.53
signif. (p<)  0.0001    0.0001    0.0001    0.0001          0.0001    0.0001    0.0001    0.0001
Table 5

Diversity index 1D, the exponential entropy (e.entr.), Theil index (Th), Herfindahl–Hirschman index (HHI), and Gini coefficient (Gi), for specific years and for the cumulated data over the relevant time interval, for the submitted, accepted, and accepted-if-submitted papers, respectively, to both investigated journals.

              JSCS                                            Entropy
index         2012      2013      2014      [2012–2014]     2014      2015      2016      [2014–2016]
submitted papers
1D            11.574    11.729    11.669    11.893          11.730    11.942    11.943    11.952
e.entr.       0.08640   0.08526   0.08570   0.08408         0.08526   0.08373   0.08373   0.08367
Th            0.03619   0.02287   0.02797   0.00893         0.02280   0.00480   0.00480   0.00399
HHI           0.08945   0.08698   0.08788   0.08478         0.08740   0.08415   0.08410   0.08399
Gi            0.15063   0.11749   0.13139   0.07329         0.11369   0.05402   0.05192   0.04861
accepted papers
1D            11.364    11.349    11.069    11.719          11.584    11.817    11.848    11.904
e.entr.       0.08799   0.088114  0.09034   0.08533         0.08633   0.08463   0.08440   0.08401
Th            0.05446   0.05578   0.08073   0.02371         0.03528   0.01539   0.01275   0.00803
HHI           0.09281   0.09308   0.09646   0.08746         0.08914   0.08598   0.08553   0.08464
Gi            0.18646   0.18949   0.22557   0.12164         0.14335   0.09404   0.08930   0.07027
accepted papers if submitted in a given month
1D            55.257    60.158    62.186    67.602          44.341    63.116    73.278    65.912
e.entr.       0.08504   0.08634   0.08737   0.08438         0.08478   0.08423   0.08387   0.08364
Th            0.02022   0.03614   0.04727   0.01244         0.01716   0.01070   0.00641   0.00365
HHI           0.08670   0.08924   0.09056   0.08546         0.08608   0.08509   0.08442   0.08394
Gi            0.11355   0.15211   0.15965   0.08820         0.10083   0.08264   0.06189   0.04808

3. Data Analysis

3.1. Data

First, notice that the 3-year-long time series is not in itself part of the main aim of the paper; this is because we intend to compare data with an equivalent number of degrees of freedom, that is, 11, for all studied cases. Nevertheless, for completeness, and in order not to distract readers from our framework, we provide the relevant figures in Appendix A, together with a note on the corresponding discrete Fourier transform. A short note, in the Appendix, recalls the meaning of the (p-) significance level.

3.2. Analysis

The relevant values of the various indices, given in Table 1, Table 2, Table 3 and Table 4, both for JSCS and for Entropy, serve the following analysis. We consider three aspects: (i) a posteriori feature findings, (ii) non-linear entropy indices, and (iii) forecasting aspects.

3.2.1. A Posteriori Feature Findings

Browsing through Table 1, it can be noticed that the distribution of probabilities of submissions is weaker during the February–May months for JSCS, but is rather high in the fall and winter months. For Entropy, the highest probability of submissions also occurs in October–December, and is preceded by a low rate of submissions, the lowest being in February and in August, should one say at vacation times. Let us recall that the extremum entropy (for "perfect disorder") is here ln 12 ≃ 2.4849. Apparently, this submission evolution pattern is reflected (see Table 2) in the acceptance rate, except for JSCS, which has a low acceptance rate for papers submitted in winter 2014. For Entropy, the weaker acceptance rate occurs for papers submitted during the August–September months, say the end of summer time. Statistical tests, for example χ2, can be provided to ensure the validity of these findings for percentages, while taking into account the number of observations. In all cases, such a test demonstrates that the distributions are far from uniform, suggesting looking further for the major deviations. See a discussion of other tests in Section 3.2.3. However, q_a values only measure the probability of monthly acceptances without considering the number of submissions in a given month. It is in this respect more appropriate to look at the conditional probabilities, p(a|s), as in Table 3. For JSCS, the highest values of p(a|s) are found for winter months: q(a|s)(m) has a notable maximum in January, with the lowest values in spring–summer time, from March till August. There is a shift of such a pattern for Entropy: the highest conditional probabilities occur during spring time, except in 2016. The corresponding values of the monthly entropy, for the given years and for the cumulated distributions, are found in Table 4. All values of the entropy are remarkably similar, both for JSCS and Entropy, suggesting some sort of universality. 
One can notice that the entropy steadily increases as a function of time, both for JSCS and Entropy, the growth rate being about twice as large for the latter journal. This is somewhat surprising, since one should expect an averaging effect in the case of Entropy because of the multidisciplinarity of the topics involved. Comparing such values indicates that the distributions are indeed far from uniform. (The slight difference between the last lines of Table 3 and Table 4, displaying the "conditional entropy", is merely due to rounding errors.)
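The χ2 values quoted in Tables 1 and 2 compare the monthly counts with a uniform distribution. As a short check, the JSCS 2012 monthly submission counts below, reconstructed from Ns = 317 and the qs(m,y) column of Table 1, reproduce the tabulated χ2 = 23.278 (11 degrees of freedom):

```python
def chi2_uniform(counts):
    """Chi-squared statistic of monthly counts against a uniform distribution."""
    expected = sum(counts) / len(counts)   # N / 12 papers expected per month
    return sum((n - expected) ** 2 / expected for n in counts)

# JSCS 2012 monthly submissions, January to December (sum = 317)
jscs_2012 = [26, 15, 19, 31, 26, 22, 31, 22, 21, 38, 26, 40]
chi2 = chi2_uniform(jscs_2012)   # ~23.28, as in Table 1
```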

3.2.2. Non-Linear Entropy Indices

The diversity and inequality measures are given in Table 5. The diversity index 1D is remarkably similar for both journals (∼11) for the submitted-papers and accepted-papers distributions. The similarity also holds for the HHI, although it is a little bit lower for the Entropy journal. The diversity index for the conditional probability distributions is, however, rather different: both increase as a function of time, indicating an increase in concentrations in favor of relevant months. This increase rate is much higher for Entropy than for JSCS. The inequality between months is rather low, as seen from the Gini coefficient: there is a weak inequality between months. However, there is a factor ∼2 in favor of JSCS, which we interpret as being due to the greater specificity of JSCS, implying a smaller involved community and specially favored topics. This numerical observation reinforces what can be deduced from the Theil index, whence inducing the same conclusion.

3.2.3. Forecasting Aspects

Considering the rather small sizes of both samples (not our fault!), it is of interest to discuss the significance of the findings, in some sense in view of suggesting some "strategy" after the "diagnosis". The notions of "false positives" and "false negatives", as in medical testing, can be applied in our framework. In brief, a "false positive" is an error in which a test result improperly indicates the presence (high probability) of an outcome when in reality it is not present; a contrario, a "false negative" is an error in which a test result improperly indicates the absence of a condition (the result is negative) when in reality it is present. This corresponds to rejecting (or accepting) a null hypothesis, for example, in econometrics. Thus, two statistical tests have been used for such a discussion: (i) the Student t-test and (ii) the z-test. Recall that the former is used when one does not know the variance (or standard deviation) of the sample and test distributions, and the latter when one does. Such characteristics are given in Table 1, Table 2, Table 3 and Table 4 for each relevant quantity. For completeness, the confidence interval [μ − 2σ; μ + 2σ] has also been given. It is easily seen that there is no outlier. This observation would lead us, like other authors, to claim that there is no anomaly in the monthly numbers and subsequent percentages, in contradistinction with the χ2 values and tests. We should point out here that the Student t-test leads to a p-value < 0.0001, a quite significant result. Concentrating our attention on the (monthly and annual) conditional probabilities p(a|s), the z-test gives the significance reported in Table 3. The p-values (the so-called α, or type I error, in hypothesis testing) indicate that the correct conclusion is to reject the null hypothesis and to consider the existence of "false positives". This is essentially due to the sample size. It is remarkable that the order of magnitude differs for JSCS and for Entropy.
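As an illustration of this kind of test (a sketch only, not necessarily the exact statistic computed for the tables), a one-sample z-test for proportions can compare a single month's acceptance rate with the overall one. The December figures for Entropy below are reconstructed from the cumulated columns of Tables 1–3: about 250 submissions, 104 acceptances, and an overall acceptance rate of 1250/2573 ≃ 0.486.

```python
import math

def z_prop(k, n, p0):
    """z-score for k successes out of n trials against a null proportion p0."""
    p_hat = k / n
    se = math.sqrt(p0 * (1 - p0) / n)   # standard error under the null hypothesis
    return (p_hat - p0) / se

# Entropy, December (cumulated 2014-2016): 104 acceptances out of 250 submissions,
# tested against the overall acceptance rate 1250/2573
z = z_prop(104, 250, 1250 / 2573)   # ~ -2.2: December is significantly below average
```

A |z| above roughly 2 corresponds to a two-sided p-level near 0.03, the order of magnitude seen in the last column of Table 3.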

4. Conclusions

The data on the number of submitted papers is relevant for editors and, more so nowadays, for publishers, due to the automatic handling of papers. The relative number of accepted papers is less significant in that respect, but the conditional probability of having a paper accepted if it is submitted in a given month is very relevant for authors. Authors expect a fast and (hopefully) positive response from journals, and are probably interested in discovering the best timing for their submission in order to avoid possible editor overload and a negative effect at a particular moment. For these authors, the possible seasonal bias issue is expected to be relevant, as they would like to know whether a specific month of submission will increase the chance that their paper will be accepted. Thus, the probability of acceptance, the so-called "acceptance rate", is the relevant variable to be studied. Instead of χ2 tests or observing the "confidence interval" on monthly distributions, we have proposed a new line of approach: considering the diversity and inequality in the distributions of papers submitted, accepted, or accepted if submitted in a given month, through information indices like the Shannon entropy [25], the diversity index, the Gini coefficient, and the Herfindahl–Hirschman index. From these case studies, a seasonal bias seems stronger in the specialized journal (JSCS). The features are emphasized because we use a non-linear transformation of the data, through information concepts, whose usefulness has been demonstrated in many other fields [26]. In the present cases, seasonal bias effects are observed. The overall significance and the universality features might have to be re-examined if more data were available. Indeed, the p-values (the so-called α, or type I error, in hypothesis testing) indicate that the correct conclusion is to consider the existence of "false positives". Our outlined findings suggest intrinsic behavioral hypotheses for future research. 
Complementary aspects must be used as ingredients in order to understand whether some seasonal bias occurs [27,28]. One has to take into account the scientific work environment, besides the journal's favored topics.
Table A1

The two largest amplitudes of frequency f, in inverse months (periods, in months, within parentheses), resulting from a Fourier analysis of the 3-year time series for papers submitted (Ns) or accepted if submitted (Na) during a given month to JSCS and Entropy, as displayed in Figure A1.

      JSCS                                      Entropy
      Ns        f               Na       f               Ns        f               Na       f
1     125.42    0.3333 (3)      66.83    0.0833 (12)     720.23    0.0278 (36)     169.36   0.0556 (18)
2     94.94     0.3889 (2.57)   51.11    0.3333 (3)      378.38    0.0833 (12)     164.15   0.0833 (12)
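The amplitudes of Table A1 follow from a plain discrete Fourier transform of the 36-month series. A standard-library Python sketch on a synthetic series with an annual cycle (illustrative data, not the journals') shows how the dominant frequency is extracted:

```python
import cmath
import math

def dft_amplitudes(x):
    """Return (frequency, amplitude) pairs of the DFT of a real series x."""
    n = len(x)
    out = []
    for k in range(1, n // 2 + 1):   # skip k = 0, which only carries the mean
        c = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        out.append((k / n, abs(c)))
    return out

# Synthetic 36-month series with a 12-month seasonal cycle around a mean of 50
series = [50 + 10 * math.cos(2 * math.pi * t / 12) for t in range(36)]
amps = dft_amplitudes(series)
peak_freq = max(amps, key=lambda fa: fa[1])[0]   # 3/36 = 0.0833, i.e., a 12-month period
```

With a 36-point series, the resolvable frequencies are k/36 for k = 1, ..., 18, which is why the periods in Table A1 appear as 36, 18, 12, 3, etc.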

1.  Review time in peer review: quantitative analysis and modelling of editorial workflows.

Authors:  Maciej J Mrowinski; Agata Fronczak; Piotr Fronczak; Olgica Nedic; Marcel Ausloos
Journal:  Scientometrics       Date:  2016-02-09       Impact factor: 3.238

2.  Artificial intelligence in peer review: How can evolutionary computation support journal editors?

Authors:  Maciej J Mrowinski; Piotr Fronczak; Agata Fronczak; Marcel Ausloos; Olgica Nedic
Journal:  PLoS One       Date:  2017-09-20       Impact factor: 3.240


1.  Getting a head start: turn-of-the-month submission effect for accepted papers in management journals.

Authors:  Liang Meng; Haifeng Wang; Pengfei Han
Journal:  Scientometrics       Date:  2020-06-25       Impact factor: 3.238

